Internet Marketing

Note: This is an evolving tutorial with more to be added.

It seems that everyone wants to be on the Internet. And for good reason. It is safe to say it is the most important new method for conducting business in the last 50 years if not longer. Unfortunately, many marketers jumped in with little knowledge of how to do it right. Internet marketing experts agree that anyone who is interested in doing business on the Internet should take time to learn about what it is they are getting into.

SEM: Accessible URLs

As we saw in our discussion of Site Navigation, it is vital that web marketers create a site structure that allows search engines to easily navigate through the site in order to locate content.  This means creating an internal linking structure that enables search engines to locate all important pages.

As we’ve discussed, search engines seek out links, found both internally (within the site) and externally (found through another site), that are then followed (i.e., crawled by search engine robot software) in order to index a website (i.e., gather information).  Yet by itself a link is not enough to open the door and let a search engine do its thing.  Marketers must understand that the URL (i.e., web address) contained in the link can pose problems to search engine indexing activity.

In this part of our Search Engine Marketing tutorial we explore four important considerations for ensuring the URL contained in a link is accessible to search engine robots.  These considerations include:

  • Search engine friendly URLs
  • URL naming
  • Session identifiers
  • Password protected content

Search Engine Friendly URLs

As a search engine traverses a site in its attempt to locate content, it actively looks for links that contain a URL.  (It should be noted that not all links contain URLs.  Some, for instance, will take a site visitor to another place on the same page and not to a different page.)  Individual webpages are identified with a unique web address or Uniform Resource Locator (URL) that generally includes the site name followed by other identifiers.  In the beginning of the web these identifiers were associated with the location of a unique HTML file and often took the form: http://sitename/foldername/filename

These pages are considered “static” HTML pages since each page and all elements of the page are stored as a file and do not change unless the website operator changes individual files.  For instance, if the website owner wants to make a design change that affects the entire site she/he would need to adjust ALL site pages individually. 

However, as we’ve discussed in previous parts of this tutorial, many of today’s websites are generated dynamically through a combination of special programming language and an associated database that stores website information.  Yet even though many sites no longer use individual files as webpages (i.e., page is created as the user visits the site), the web address of each webpage is still unique.

So what does this mean for search engine marketing?  In general, search engines have had little problem navigating through sites built with static HTML webpages, but the same is not true for dynamic sites.  The problem is that dynamic sites present a web address that is based on the scripting program being run.  For instance, a search for books dealing with “marketing strategy” may produce a URL that looks something like this (a hypothetical example): http://www.sitename.com/search.php?cat=books&query=marketing+strategy

Historically, search engines have experienced difficulty in indexing pages with a dynamically generated web address (the reasons are mostly technical in nature and beyond the scope of this tutorial).  And while today’s search engines are much better at indexing these pages, marketers are still encouraged to avoid using dynamically generated URLs especially on important information content pages. 

To correct this problem, marketers should seek website software that allows for the creation of “search engine friendly” (SEF) URLs.  Essentially, SEF URLs rewrite the dynamic URL in a form that is more representative of the URL of a static HTML page.  For instance, a dynamically generated address may look something like this (a hypothetical example): http://www.sitename.com/index.php?page=45&cat=7

By using methods of creating SEF URLs, the same page takes on the more common HTML look: http://www.sitename.com/tutorials/internet-marketing

Web marketers who are not skilled at handling adjustments to web servers should move their site to SEF URLs in consultation with their web operations staff or outside web hosting company.
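As an illustration only, on a site hosted on an Apache web server this kind of rewriting is often handled with the mod_rewrite module in the “.htaccess” file.  The rule below is a minimal sketch; the script name and “tutorial” parameter are hypothetical, not a drop-in configuration:

```apache
# .htaccess - sketch of a search engine friendly rewrite rule
RewriteEngine On
# Serve /tutorials/internet-marketing from the dynamic script internally,
# so visitors and search engines see only the clean address
RewriteRule ^tutorials/([a-z0-9-]+)$ /index.php?tutorial=$1 [L]
```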

URL Naming

The naming scheme of a URL should not only be presented in a manner that is friendly to search engine crawling activity, but it should also be descriptive of what is contained on the page.  As we will see, descriptive naming of URLs may serve to benefit not only the search engine but also people who are exposed to the URL.

In general, a descriptive URL name reflects the title of the page.  For instance, a page titled “FAQ” may carry the URL name http://www.sitename.com/faq (a hypothetical example).  But what happens if the page is titled “Frequently Asked Questions” and not just “FAQ”?  The URL name could be http://www.sitename.com/frequentlyaskedquestions, but the lack of separation between the words in “frequently asked questions” presents two problems: one for humans and one for search engines.

The Human Problem

In cases where this URL is visibly displayed, the actual description of the page may not be readily apparent to anyone seeing it in their browser.  For example, as we will see in more detail in a later tutorial, getting sites around the Internet to link back to a site is very important in search engine marketing.  In most cases a webpage’s URL is not visibly displayed in the link but instead is contained as linked text, such as KnowThis FAQs.  However, in other cases a site will simply display the full URL, such as “Here is a site you should see: http://www.sitename.com/frequentlyaskedquestions”.  Clearly, even though the page title is contained within the URL, the fact that the words are run together may make it difficult for someone to quickly discern the purpose of the page behind the link.

The Search Engine Problem

It is believed that some search engines give additional weight to URLs that actually reflect the topic of the page, but only if the URL is fully understood.  Unfortunately, run-on words are difficult for search engines to understand.  For instance, consider a page that contains an article with the following URL: http://www.sitename.com/mattsmartinidealposition

Without separation the wording may leave open to interpretation the real title of the article.  For instance, the title could actually be one of the following:

  • Matts Mart in Ideal Position – possibly referring to a retail store
  • Matt Smart in Ideal Position – possibly referring to a person named Matt Smart
  • Matts Martini Deal Position – possibly referring to a company named Matt’s Martini

To overcome these problems web marketers should learn to use descriptive URL naming and also to separate words in the URL.  The choice of separator is open to much debate in the search engine marketing community.  The options most frequently used are the hyphen (dash) “-” character and the underscore “_” character.  While the debate over which character is best will not be taken up in this tutorial, it is important for web marketers to use some type of separator between words in a URL.  This applies both to the naming of individual content items and to the categories/folders in which a content item is contained.
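As a sketch of how descriptive, hyphen-separated URL names can be generated from page titles (the function name here is our own; most content management systems provide something equivalent):

```python
import re

def slugify(title):
    """Turn a page title into a lowercase, hyphen-separated URL name."""
    # Replace any run of characters that are not letters or digits
    # with a single hyphen, then trim stray hyphens at the ends
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(slugify("Frequently Asked Questions"))    # frequently-asked-questions
print(slugify("Matt Smart in Ideal Position"))  # matt-smart-in-ideal-position
```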

Session Identifiers

Another potential concern with URLs is the use of so-called “session identifiers”.  A session identifier is a unique value that some websites assign to each visitor (including search engine robots) upon entering a site and is often appended to a webpage’s URL.  These identifiers are intended to aid in research gathering by allowing the web marketer to track individual visitors as they maneuver through the site.

However, from a search engine’s point of view the inclusion of a unique session ID within the URL signals to the search engine that a new webpage exists since there is a new, unique URL.  Even though the page content is the same for all visitors to a page, the addition of the session ID to the URL suggests to search engines that a new page is now available on the site and, thus, is indexed as a new page.

Unfortunately, session IDs last only as long as the human or search robot visitor is on the site.  Consequently, every time a search engine returns to re-index a page, the page will not be found since the URL containing a session ID is no longer valid.  This is important because the algorithms search engines use to determine rankings for a keyword search are much more receptive to webpages whose content is associated with a stable URL.  That way, when the search engine lists the site in response to a keyword query, clicking on the link will actually direct the search engine user to a working page.  This, of course, improves the search engine’s ability to satisfy its users.  For search engines, rewarding sites with stable URLs is one way to ensure quality when delivering search results.  They know users do not want to be presented with a search result that, when clicked, leads to a “page not found” message.

Obviously, web marketers that use session IDs to gain insight into site visitor activity must consider the cost/benefit of dropping session IDs in favor of potentially improved search engine traffic.  The good news is that search engines are improving their ability to recognize session IDs and strip these from a URL without affecting the link.  But this ability is still inconsistent and web marketers seeking improved search engine traffic are advised to consider removing session IDs from their site’s URLs.
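One way a site can present stable addresses is to strip tracking parameters from the URLs it exposes to links and search engines.  The sketch below assumes the session parameter names shown; actual names vary by site platform:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters commonly used for session tracking (assumed names)
SESSION_PARAMS = {"sessionid", "sid", "phpsessid", "jsessionid"}

def strip_session_id(url):
    """Return the URL with session-identifier parameters removed,
    yielding the stable address a search engine should index."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in SESSION_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), parts.fragment))

print(strip_session_id(
    "http://www.sitename.com/products?cat=5&sessionid=a1b2c3"))
# http://www.sitename.com/products?cat=5
```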

Password Protected Content

One final consideration for allowing search engines to access a site is to understand that content that is password protected will not be indexed.  This applies whether individual content items are protected (e.g., articles) or the entire site is accessible only via password.  If the door is closed to the public, it is most likely closed to search engines.

Sites whose business model restricts access to some or all content, but that still want search engines to locate their information, should consider offering non-restricted access to summaries or abstracts of password-protected materials.  For instance, the site can make available to the general public the title of restricted content along with some additional details, such as the first paragraph of the written material.  In this way search engines are exposed to a portion of the content, which is better than not having access at all.

SEM: Site Navigation

Long before a search engine can ever determine whether a webpage will satisfy a user’s search query, the search engine must be in a position to locate the content.  As we discussed in an earlier section of the tutorial, search engines gather information about websites by sending out software “robots” (a.k.a. spiders, crawlers) to scan the Internet.  To find their way from one site to another, and to navigate within a single site, search engine robots locate and follow links.  These automated programs locate information, retrieve the data and store what is found in massive databases.  Once additional software analyzes or “indexes” the acquired material, it can then be made available to respond to users’ search queries.

Marketers need to understand the process search engines use to build their information repositories if their website content is to be fully included.  A marketer has no chance of using search engines to drive traffic to their site if information on the site is not contained within a search engine’s database.  This, of course, means the site must be accessible to search engine robots.

But how does a search engine know where to find websites?  In some cases a marketer can send a message to the search engine letting it know the web address of content.  For instance, all three major search engines (Google, MSN and Yahoo) have online forms that allow marketers to submit a site’s main URL to trigger crawling.  But this only gets the search engine spider to a site’s front door.  To reach pages inside the site, the robot must either: 1) follow links found internally on the site, or 2) follow links appearing on an external site (i.e., another website).  Unless a website consists of only one or a few pages, it is unlikely that all pages can be found through external links.  Rather, search engine robots must rely on the site itself for guidance in locating content found inside the site.  This means a site must contain an internal linking system to guide a search engine as it indexes the site.

To ensure search engine robots can find webpages through internal links, marketers should consider the following issues:

  • Building a menu system
  • Creating a sitemap
  • Using page redirects
  • Managing broken links
  • Restricting crawling activity

We should note that some of the material discussed below will require adjustments to a website’s operational side (e.g., adjustments on the web server).  Marketers who are not familiar with the technical aspects of operating a website are encouraged to discuss these issues with their technical contacts.

Building a Menu System

A website should be designed in such a way that makes it easy for both users and search engine robots to move around the site and allow access to all material contained within the site.  In most cases navigating a site is handled via a menu system that includes links to important internal content areas.  Some sites, particularly sites with a relatively small number of pages, provide access to nearly all internal pages through a single main menu.  Alternatively, sites with many content areas often follow a hierarchical or drill-down design where there is a main menu containing links to important areas and then once in an area users are presented with a menu of links to further sub-areas.  In some cases sub-areas may contain even more menus.

When designing a menu system, marketers should take the following into consideration:

  • Design for Users But Within SE Marketing Parameters – Menu systems should, first and foremost, be built for site visitors and be both intuitive (containing what a visitor expects and allowing visitors to get where they want to go) and consistent (following a similar pattern and design from one page to another).  However, as we will soon discuss, marketers should be cognizant of the potential obstacles menu designs present to search engine crawlers.  So while a menu should be built for the website’s targeted audience, it should be done with an understanding of how search engines see the menu.
  • Text Menus May Be Better Than Dynamic Menus – A common method used by websites to expose users to many links within a menu is to use dynamically scripted menus.  Many sites use menus built with JavaScript, a special Internet scripting language that allows menus to be presented in many interesting ways, including expanding when a user clicks or runs their mouse over a particular menu item.  Unfortunately, JavaScript can pose a problem for search robots, which may have difficulty following the links.  As Google notes in its recommendations for webmasters, website operators should be careful when using JavaScript, and Google recommends that every important content item also be linked with at least one text link.  A text link is a traditional HTML link, just as we used above to link to Google’s site.  So while the use of dynamic menus can enhance a user’s experience with a site, it is important to offer a secondary method for accessing pages via text links (see Sitemap below).
  • Text Menus May Be Better Than Image Menus – One additional bit of caution deals with menus created using images.  As we discussed in previous tutorials, search engines are quite adept at recognizing content produced as text but often fall short at recognizing images.  Many sites use a menu system built on linked images and not linked text.  For instance, a menu may show an image containing the words “Our Products” that when clicked on takes a user to the marketer’s product page.  Yet search engines do not view this as text.  We will discuss the importance of this in greater detail in a later tutorial, but for now understand that search engines not only follow links but they also attempt to gain understanding about the link.  In this example, a search engine attempts to understand what the link is pointing to (a product page).  While this is easily done with text links it is more difficult with image links.  If image links need to be used, the marketer should understand the importance of the ALT tag that is associated with the image link.  Better yet, create menus that are text menus but use images as background and not as links.
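To make the contrast concrete, here is a simple sketch of the two approaches (file names are hypothetical).  With the image link, the ALT text is the only wording a search engine can read; with the text link, the anchor text itself describes the destination:

```html
<!-- Image link: search engines rely on the ALT text -->
<a href="/products.html"><img src="menu-products.gif" alt="Our Products"></a>

<!-- Text link: the anchor text tells both users and search engines
     what the destination page is about -->
<a href="/products.html">Our Products</a>
```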

Creating a Sitemap

While a good menu structure is important to aiding the crawling activity of search robots, for most medium-to-large sites, menus alone are often not sufficient to house links to all content contained in a site.  This is particularly the case where a site has a large number of areas that are not easily pointed to from a main menu.  Additionally, as we noted, menus delivered dynamically or as an image may not be easily understood by search engines.

For these situations it may be helpful to include a sitemap.  A sitemap offers a single page guide to site content and is presented in the form of text links.  A sitemap can help search engines locate hard to find pages.  In fact, many search engines now strongly encourage sitemaps as a way to speed up the process of locating site content.
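A sitemap page can be as simple as a list of plain text links, one for each important content area.  The sketch below uses hypothetical section names and file paths:

```html
<h1>Sitemap</h1>
<ul>
  <li><a href="/products.html">Products</a></li>
  <li><a href="/products/widgets.html">Products: Widgets</a></li>
  <li><a href="/support/faq.html">Frequently Asked Questions</a></li>
  <li><a href="/about.html">About the Company</a></li>
</ul>
```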

Using Page Redirects

There are times a marketer must change the file location of a webpage.  This can occur for several reasons including:

  • Domain Name Change – The marketer’s website domain has been changed, often as the result of a corporate reorganization, such as a merger between two companies or the consolidation of several domains under a single domain.
  • URL Renaming – The marketer may decide to rename a file in order to gain search engine advantages (to be discussed in a later tutorial).
  • Page Replacement – The marketer has removed a webpage and would now like visitors to see something different (e.g., old product page to new product page).

In instances of URL or file name changes, or page removals where the material is still to be viewed (i.e., not deleted), web marketers must direct search engines to the new location or risk not having the webpage found by search engine robots.  Depending on the server platform on which the website is hosted, the process requires a manual entry to instruct the web server to direct requests to the new location.  For instance, for web servers operating in the Apache web server environment, instructions to automatically redirect a URL request are handled in a file called “.htaccess”.  For users and search engines, the result of the redirect is transparent.
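As a sketch of how this looks on an Apache server, a permanent (301) redirect placed in the .htaccess file tells browsers and search engines alike that a page has moved.  The paths below are hypothetical:

```apache
# .htaccess - permanent redirect from a retired page to its replacement
Redirect 301 /old-product.html http://www.sitename.com/new-product.html
```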

Managing Broken Links

While web marketers should do their best to direct users from old pages to new, invariably some links will lead users and search engines to a dead end.  On the web this means the dreaded “page not found” screen will appear in the web browser when a web server is unable to locate the requested page.  Marketers should understand that a “page not found” message (technically called a 404 error) is not always due to website mistakes.  This error is triggered any time a web address cannot be located on a server.  So any broken link pointing to the marketer’s website, whether on the marketer’s site or on an external site, will produce this error if the URL is not reachable.  For example, if a major news website posts a favorable article about a marketer’s product but mistakenly lists a wrong web address, the 404 server error will appear when readers attempt to click the link from the news website.  Under this situation, search engines crawling the news site may follow the link to the marketer’s site but will quickly leave when they encounter the “page not found” error, thus preventing the indexing of the page at least until the robot returns to the marketer’s website, which may be some time later.

To overcome the “page not found” dead end, marketers are wise to produce a customized 404 error page.  This page should give the appearance of a regular webpage found on the site and should contain menus, so that any person or search engine robot reaching it is aware that a real site does exist.  This approach will also allow a search robot to index other parts of the site by following the links.
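On an Apache server, pointing the server at a custom error page is a one-line directive in .htaccess (the file name below is hypothetical):

```apache
# Serve the site's custom "page not found" page, complete with menus,
# for any request the server cannot locate
ErrorDocument 404 /404.html
```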

Restricting Crawling Activity

Just as marketers can guide search engine robots to locations they want indexed, marketers can also prevent robots from gathering information from certain areas of a site.  There are two main reasons to control the files a search engine will access.  First, some areas of a website may contain information that a marketer would not like to be made public.  For instance, the company may be developing new information on a product but is not yet ready to make it publicly available. 

A second key reason to restrict robot access is to reduce the amount of traffic on a website’s server.  Too much traffic can place strain on the server resulting in slower loading of all webpages.

To restrict robot traffic, websites follow something called the robots exclusion protocol.  This involves placing on the site a file named “robots.txt” that contains instructions letting search engine robots know what is and is not accessible.  Good information on how to write instructions for the robots.txt file is available online.
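A minimal robots.txt sketch might look like the following.  The directory names are hypothetical, and the rules apply to all robots that honor the protocol:

```
# robots.txt - placed at the root of the site
User-agent: *
Disallow: /in-development/
Disallow: /admin/
```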

SEM: How to Find Keywords

In this part of our discussion of Search Engine Marketing we explore ways to locate the right keywords to attract search traffic including the use of several online tools and other useful techniques.

In the first two sections, we stressed that using keywords within a webpage title, page headings and page content are important for gaining traffic referrals through search engines.  Yet for many marketers unfamiliar with building strong content-rich websites the obvious question arises: How do I know which are the right keywords?  In this part of our on-going tutorial we address this question and offer several suggestions for finding the right keywords.

We stress that our goal is not to suggest tricks to attract search engines to a website but to offer sound, basic advice that makes a site more open to search engine evaluation.

We break down the ways of finding keywords into the following seven categories:

  • Generally accepted word usage
  • Search engine supplied keyword suggestion tools
  • For-fee services
  • Free web tools
  • Search engine clustering suggestions
  • Trend monitoring
  • Analysis of server logs and internal search logs

Generally Accepted Word Usage

Keyword development begins with an understanding of the words and phrases most common to the marketer’s industry.  Most marketers should have no problem developing a list of terms; however, there are bound to be terms that do not easily come to mind.  A good tactic for dredging up words is to peruse the past twelve months of industry trade magazines (either print or online).  Look for words and phrases mentioned in several different issues of the same magazine and across different magazines, since the more often terms are repeated, the more likely they are important.  Pay particularly close attention to terms found in the titles of articles, since titles are often written to attract attention and may include the most common form of the term.

Once these words are discovered, make sure the words are not strictly “supplier” jargon but are words used by customers.  A frequent mistake websites make is to focus on terms used internally by the marketing organization rather than on words used by customers and the media to refer to the same thing.  One method of checking this is to visit websites customers use to learn about industry products (e.g., consumer magazine sites, consumer forums) and use the site’s search feature to enter the search terms discovered in the trade publications.  You may find reference to the term within the context of “also called” or “sometimes referred to as” along with the terminology more common to the market.  If the site does not have a search feature, or the search feature is not very robust, use the Advanced Search feature on one of the major search engines and limit your search to the web address of the consumer site.  A search engine’s strong indexing capability should locate most information on the site, assuming it was freely available when originally posted.

Search Engine Supplied Keyword Suggestion Tools

Possibly the most cost-effective method for determining a large number of keywords is to use tools offered by search engines within their advertising support toolbox.  The three major search engines in the U.S. (Google, MSN and Yahoo) provide outstanding keyword suggestion features.  The search engines do this in support of their advertising programs, which allow marketers to place keyword-generated ads (i.e., ads based on a user’s search string) on the search engine as well as on other websites within the search engine’s advertising network.  With the exception of Yahoo, access to these tools requires registration, which may include a small one-time fee.

The keyword suggestions offered through these sites are based on searches performed through the main search engine, its international versions, and a search engine’s network of sites.  These networks are made up of websites that include the search engine’s functionality on their site.  For example, Google supplies search functionality to America Online, Comcast and thousands of smaller sites, while Yahoo supports search for CNN and USA Today.  Whether a site’s search is powered by a major search engine can often be determined by examining the area around the search box, since it generally contains wording saying the site’s search function is “Powered by” or “Enhanced by” one of the top search engines.

The keyword suggestion tools offered by the leading search engines include:

  • Google – Within Google’s advertising management program, called AdWords, users will find several tools for selecting the right keywords.  The basic option provides a list of words related to a user-entered word.  Another option gives a listing of synonyms for an entered keyword.  A third option allows a user to enter a website address, after which Google performs an analysis and returns appropriate keywords for the site.  And the URL to be evaluated is not limited to the marketer’s own; entering the URL of a competitor’s site will also produce a list of keywords.  Finally, Google’s keyword tool offers a glimpse of a keyword’s search volume, thus indicating which words are currently more important.
  • MSN – Microsoft’s MSN search engine’s advertising tool is a recent addition to search engine advertising.  In the past, all ads appearing on MSN were placed on the site through arrangements with other advertising providers including Google.  But in 2005 MSN began testing its own advertising program called AdCenter.  While still in its infancy, the keyword tools offered by MSN appear to be useful and worth a try.
  • Yahoo – Yahoo’s Keyword Selector tool (sometimes referred to by its former name, Overture) is the only one that can be used without registration.  Unlike the other tools, the Yahoo tool does not offer suggestions for related keywords.  The best feature of the Keyword Selector tool is the reported number of times a keyword was used in a search query within the Yahoo network of sites.  Generally this information is shown for the last full month.

For-Fee Services

While most of the search engine tools we’ve discussed are either free or available at low cost, there are other options for finding keywords that require payment.  While these may be costly, the website marketer should at least spend time evaluating them since these are popular among many search engine optimization professionals.

  • Subscription Services – Several Internet sites offer keyword help by analyzing searches in ways that differ from the methods used by the top three search engines’ tools.  The leader among these services is Wordtracker, which compiles information from searches done through several lower-level search engines.  Along the same lines as Wordtracker is KeywordDiscovery, which claims to gather information from a large number of search engines, though most of what it does is based on deals made with Internet Service Providers, who are able to record search queries in an indirect way and then share this information with KeywordDiscovery.
  • Research Reports – At a cost that far exceeds the Subscription Services are the reports offered by Internet research companies.  These reports are not cheap and probably only of interest to the well-funded website marketer.  The leading option is Hitwise, which produces monthly reports by industry including analysis of search terms.  Like KeywordDiscovery, Hitwise gathers its data from relationships with Internet Service Providers but also utilizes software installed on many websites and computers.

Free Web Tools

There are many free tools on the Internet that offer help with keyword search.  Here are a few:

  • Google Suggest – In addition to the keyword tools offered in its advertising area, Google offers an interesting feature called Google Suggest, which offers suggested alternative words for what a user enters in the search box.  The fun part of this feature is that it works as the user types.
  • DigitalPoint – This site is mostly oriented to techie-types, but it does offer a number of tools that marketers will find interesting.  For keywords there is a nice tool that combines Yahoo’s Keyword Selector tool and Wordtracker to give keyword suggestions.  The tool not only lists suggested words but goes a step further and indicates per-day search frequency for each suggestion.

Search Engine Clustering Suggestions

So far we’ve discussed how search engines and other tools provide ideas for keywords using backend tools (i.e., not on the search page) directed specifically at website marketers.  However, there is an additional front-end feature, offered by a couple of search engines, that could be helpful to website marketers.  The feature, called clustering, is primarily designed to improve the user’s search experience.  Clustering helps users narrow their search by producing groups of similar search words within the same topic as the originally searched keyword.  The sites that offer it include:

  • Vivisimo – The originator of clustering, this search engine is mostly marketed as a search tool for organizational sites and counts as clients the U.S. Government and several Fortune 500 companies.  Entering a search term will produce a list of Clustered Results presented as categories, which often appear as single words rather than phrases.  However, categories can be drilled down to find more narrow keyword suggestions including phrases.
  • Microsoft SCR – Among the major search engines only Microsoft has the clustering feature, and they only have it available in testing mode on an obscure website called SCR.  The SCR stands for Search Result Clustering and this site is a development site from Microsoft’s Asia research group.  While the look is not as clean as Vivisimo, SCR displays similar results, though SCR is more likely to produce keyword phrases on the initial display.

Trend Monitoring

Newly coined words and phrases are entering the lexicon nearly every day.  While many of these never gain acceptance, a large number will eventually become widely used.  Some may even become the key term used within an industry or culture to describe the industry (e.g., podcasting, viral marketing, crackberry).

Staying on top of trends can give a website clues to keywords to include within their site.  Here are a few sites that may help:

  • Trendwatching – Interesting site that offers a monthly newsletter examining a consumer trend that appears to be occurring somewhere in the world. 
  • Wordspy – The creation of new words is often the sign of a trend, and this site provides a regularly updated list of new words, including examples of usage.  The site includes a subject index to quickly locate terms.

Analysis of Server Logs and Internal Search Logs

One of the most overlooked methods for determining important keywords is actually provided by website visitors.  Many sites, particularly smaller ones, have little knowledge of how visitors reach their site and probably know even less about what they do when they visit.  Yet each time someone visits a site they leave a trail of information, some of which can be quite useful for determining which keywords are important.  This information is contained within the web server log, which records all user activity, such as how visitors arrived at the site, what pages they visited, how long they stayed and much more.

Using site logs can be useful in several ways.  First, if a visitor is referred to a website by a search engine, server logs will record which search engine sent them AND what keyword search was used to locate the site.  Mining the server logs reveals these words.  But this raises the question: If visitors are already getting to the site using the keywords, isn’t the site already successfully optimized for those keywords?  Maybe not.  Having visitors directed to a site doesn’t mean the site is doing as well as it can with the keywords.  It is more important to examine the keywords along with the site’s position in the search engine rankings.  To do this, enter the keyword string into the search engine from which traffic was received.  Next, check the site’s ranking for that keyword.  If the site appears well down the list (e.g., below 10th place), then clearly the site is not receiving as many referrals as sites that show up on the first page.  This may suggest to the marketer that some adjustment to the site could improve its rankings.

Second, an examination of the web server logs will tell the marketer which keywords are NOT helping drive traffic to the site.  If the logs report low or no referrals for keywords that the marketer expects to do well, then the marketer should be alarmed and needs to take steps to find out why the situation exists.

Third, another type of log can also provide a great deal of information.  This log may exist on websites containing their own internal search function.  Sites that run their own search programs generally also have the option of retaining all entries typed into the search box.  From this information a marketer can glean what search terms are entered within the site.  This may offer guidance on terms that are of interest to website visitors but of which the marketer was not aware.
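As a rough sketch of the first technique, a short script can mine the referrer URLs recorded in a server log for the search keywords they contain.  The sample referrers and the list of query-parameter names below are illustrative assumptions; actual log formats and parameter names vary by server and search engine.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Hypothetical referrer URLs pulled from a web server log.  Real logs
# (e.g., Apache "combined" format) store the referrer in its own quoted
# field; "q" is a common but not universal name for the search string.
referrers = [
    "http://www.google.com/search?q=market+research+techniques",
    "http://search.yahoo.com/search?p=market+research",
    "http://www.example.com/links.html",  # not a search engine referral
]

SEARCH_PARAMS = ("q", "p", "query")  # parameter names used by various engines

def extract_keywords(referrer):
    """Return the search phrase embedded in a referrer URL, or None."""
    params = parse_qs(urlparse(referrer).query)
    for name in SEARCH_PARAMS:
        if name in params:
            return params[name][0]
    return None

# Tally the keyword phrases that actually sent visitors to the site.
counts = Counter(kw for r in referrers if (kw := extract_keywords(r)))
for phrase, n in counts.most_common():
    print(phrase, n)
```

Each phrase surfaced this way can then be checked against the site’s actual ranking in the originating search engine, as described above.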

SEM: Page Content

In this part of our on-going series on the Fundamentals of Search Engine Marketing we cover text content, page heading, and keyword usage and see why these are important when developing a website that is receptive to search engine indexing.

As we saw in the first article in our Fundamentals of Search Engine Marketing series, page title is the most important characteristic search engines assess in order to determine whether a web page is relevant to a user’s keyword search.  However, page title is only one of many, many characteristics search engines utilize within their processing algorithm to determine sites that are most relevant to a search query.

In this part of our series we continue to look at basic website design criteria by examining three additional page characteristics that may make a web page more acceptable to search engine activity.  Our focus in this part of the series is on the issues related to the content that appears within a site.

Importance of Text Content

By now most people have heard the oft-repeated mantra that “content is king” when it comes to building a successful web presence.  But this raises an important question: What qualifies as content?  Clearly content is anything that is of interest to site visitors, including downloadable music, video clips, electronic books, online games, etc.  However, for websites looking to take advantage of free traffic generated through search engines, the mantra should be refined to state that “plain-old text content is king.”  While search engines have added many new features and capabilities for searching different types of content (e.g., video files, audio files, PDF files, etc.), web searching is still dominated by the basic text search and, more importantly, the results of a search are still dominated by text content that matches the user’s search query.  One of the main reasons for this is that the search engine robots that crawl the Internet locating information do so by examining the underlying code of a site.  Presently, these crawlers perform much better when the code is associated with plain text compared to other forms of content.

For website marketers this means sites should still be principally text-based if search engines are to easily understand what the site is about.  Sites should keep non-text content, such as multimedia (e.g., Flash), to a minimum so that it does not make up the major content portion of the site or, if it does, should support it with plain text material.

Additionally, important wording should use text and not graphics.  For instance, consider our own logo.  Like most site logos it appears to be text, but in actuality it is a graphic.  While text rendered as a graphic may be stylistically attractive to a human visitor, search engines are generally unable to read text contained within a graphic.  Until search engines improve their ability to interpret graphics, it is generally good site optimization practice to use text rather than graphics for important words, especially if these words are likely to be used within a user’s keyword search.  If graphics must be used, it is recommended that a special ALT attribute be included within the coding of the site.  The ALT attribute essentially allows the website marketer to describe the graphic using text.  While useful, it still is not a replacement for plain text content.
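As an illustration of the point about ALT text, the following sketch uses Python’s standard-library HTML parser to flag images that lack an ALT description.  The sample HTML is hypothetical:

```python
from html.parser import HTMLParser

class AltChecker(HTMLParser):
    """Collect the src of every <img> tag that has no ALT text,
    i.e., every image whose wording is invisible to search engines."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

html = '''
<img src="logo.gif">
<img src="photo.jpg" alt="Our Custom Products for Hospitals">
'''

checker = AltChecker()
checker.feed(html)
print(checker.missing)  # → ['logo.gif']
```

Running such a check across a site quickly surfaces graphics whose important wording should either be converted to plain text or at least described in an ALT attribute.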

Page Heading

While the page title is the single most important characteristic to consider when designing web pages, it has one major disadvantage: it does not appear on the web page.  Rather, the title appears in the title bar at the very top of the browser.  Because of its location, and because few people actually look at the title bar, it is not very useful as a way to introduce visitors to the topic of a web page.  Also, as we noted when we discussed Page Title, while the wording in the page title should make some sense when read, its main purpose is to include the important keywords that relate to a page and, consequently, it should not be held to the same high grammatical or sentence-structure standard as information that appears on a page.

A better option for letting visitors know the topic of a particular web page is to use a descriptive summary in the form of a page heading that is strategically placed on the page and preferably above the page content.  But in addition to helping visitors know what the page is about, the page heading serves as a key consideration for search engines as they index websites. 

A good heading should clearly describe the content of the page, such as describing a website area (e.g., Company History, Corporate Press Releases, etc.), or reflect the title of a content item (e.g., the name of an article).  When developing headings, it is imperative for the heading to capture the keywords that users are most likely to enter to reach the page and, in this way, it should follow what is in the page title.  (Actually, in practice, this often works the other way around, as the page title is frequently written after the page heading is created.)  In this way the page heading serves as additional reinforcement to search engines that a page really is about what the page title says it is about.

Advice for Constructing Page Heading

There are four additional considerations when constructing page headings:

  • Use Text Headings – Many site marketers believe they have headings on their page, and by looking at the page it does appear to be the case since words appear within a graphic.  But, as we noted earlier, text inserted in a graphic will not be read by search engines and, consequently, while it projects as a heading to site visitors, who clearly see it as text, it does not project well to search engines that index the site.
  • Use of HTML Heading Tags – HTML, the underlying code of the web, has special markers for identifying important text through so-called heading tags.  These tags, indicated with HTML coding such as H1, H2, H3, etc., are also recognized by search engines as carrying greater weight than other text within a web page.  It is good search engine marketing practice to surround the main page heading in the H1 tag and use other tags for sub-headings (see next bullet).  Additionally, it is wise to limit the H1 tag to a single heading per page and when possible place the H1 heading above the content that it describes.  Normally this means placing the H1 tag so it appears near the top of the page.
  • Use Sub-Headings – Pages often are written to address sub-topics within an overall topic.  In cases where content is of a length that is realistically represented on a single page (more on longer pages in the next bullet) but where topics can be separated out, the use of sub-headings is recommended.  Additionally, sub-headings should be enclosed in higher-numbered heading tags (e.g., H2, H3).  Unlike the H1 tag, which should only be used once per page, higher-numbered tags can be used more frequently on a single page.  However, the use of these tags should not be overdone.  Heading tags used too frequently may signal to a search engine that a site is attempting to trick it into believing the site is more important than it really is (i.e., spamming the search engine).  The best rule-of-thumb is to use heading tags for real headings, that is, text that is descriptive or clearly separate from other text.
  • Break Headings Into Multiple Pages – Since the H1 tag represents important content to a search engine, it is often beneficial to divide slightly different content into multiple pages each with its own H1 header.  For instance, if a product serves more than one market then headings on different pages may include: “Our Custom Products for Hospitals”, “Our Custom Products for Colleges and Universities”, “Our Custom Products for Local Governments”, etc.
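The heading-tag advice above can be checked mechanically.  As a sketch, the following uses Python’s standard-library parser to count heading tags on a page and flag a violation of the one-H1-per-page rule of thumb (the sample HTML is hypothetical):

```python
from html.parser import HTMLParser
from collections import Counter

class HeadingCounter(HTMLParser):
    """Tally h1-h6 tags so a page can be audited against the
    one-H1-per-page rule of thumb described above."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3", "h4", "h5", "h6"):
            self.counts[tag] += 1

html = """
<h1>Our Custom Products for Hospitals</h1>
<h2>Surgical Supplies</h2>
<h2>Diagnostic Equipment</h2>
"""

counter = HeadingCounter()
counter.feed(html)
print(dict(counter.counts))  # → {'h1': 1, 'h2': 2}
if counter.counts["h1"] > 1:
    print("Warning: more than one H1 on this page")
```

The same tally also shows whether sub-headings (H2, H3) are being used at all, or used so heavily that they might be read as spamming.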

Keyword Usage in Content

While the use of keywords in the page title and page heading are critical, keyword usage should not stop there.  Keywords should be included as part of the regular content found on a page since it helps to reinforce to search engines that a site truly is relevant to the terms found in the page title and page headings.  Anyone tasked with content writing or editing responsibilities must understand which keywords are valuable and work to include these in the main content. 

However, using keywords in content can get tricky from a search engine’s perspective.  Many experts in search engine marketing believe search engines follow certain “keyword sensitive” rules when determining the relevancy of a page to a keyword search.  Keyword sensitivity may be affected by one or all of the factors listed below. 

  • Frequency – Refers to how often keywords appear within a web page.  In general, a major keyword should appear a few times within the course of moderate-size content (i.e., 500 words or more) but should not appear an overwhelming number of times (see Density below).  Major keywords are those that, in natural language, can be repeated without seeming unusual.  Some words, especially those that are infrequently searched, may not read naturally when repeated.  Repeating these too often may raise a red flag with the search engine, which could result in a site being penalized with lower search engine rankings.
  • Density – Refers to the percentage a particular keyword represents out of all words found on a page.  That is, keyword frequency divided by the total words.  Once again search engines may penalize a page if the percentage is considered too high.
  • Proximity – Refers to how close a keyword is to another keyword that appears in a multi-word search.  For instance, a cleaning services company located in Orlando may see better search ranking results when a user enters the keywords Orlando office building cleaning services if the content on the cleaning service’s web page reads “we are an Orlando area cleaning services company for commercial offices and buildings” than if the content read “we provide cleaning services for many types of buildings including commercial buildings such as offices, factories and others in and around the Orlando area.”
  • Location – Refers to the location a keyword appears on a page, such as how close to the top of page, close to a heading, within the first paragraph, etc.  In general, the earlier the keyword is mentioned in the content the more relevant the content may be to a search that uses the keyword.
  • Consistency – Refers to whether words on a page actually make sense within the scope of what the page is purported to be about.  Thus, a search engine would consider it suspicious if a page title and headings suggest the site is about boating but the wording within the main content of the page is about vitamin products.
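To make the first three factors concrete, the sketch below computes a keyword’s frequency, density, and minimum proximity (in words) to a second keyword.  Since search engines do not disclose their thresholds, no cutoff values are assumed:

```python
import re

def analyze(text, keyword, other=None):
    """Return (frequency, density, proximity) for `keyword` in `text`.
    Proximity is the smallest word-distance to `other`, or None."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    positions = [i for i, w in enumerate(words) if w == keyword.lower()]
    frequency = len(positions)
    density = frequency / total if total else 0.0
    proximity = None
    if other:
        other_pos = [i for i, w in enumerate(words) if w == other.lower()]
        if positions and other_pos:
            proximity = min(abs(a - b) for a in positions for b in other_pos)
    return frequency, density, proximity

# The Orlando cleaning-services wording from the Proximity example above.
text = ("We are an Orlando area cleaning services company "
        "for commercial offices and buildings.")
freq, dens, prox = analyze(text, "cleaning", other="orlando")
print(freq, round(dens, 3), prox)  # → 1 0.077 2
```

Location and Consistency are harder to reduce to a formula, but frequency and density figures like these are what a marketer can monitor to avoid overusing a keyword.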

Best Approach for Using Keywords

So with all this what is the best approach?  First of all, caution should be exercised in reading too much into these factors since their actual effect is often only an “educated guess” of supposed search engine marketing experts.  This is due to the highly secretive nature of search engine indexing and their understandable reluctance to disclose the workings of the ranking algorithms.  For example, while many search engine marketing experts agree that high keyword Density is a potential problem, no one knows for sure what this level is and the search engines are providing little guidance on the issue.

Also, not much of what is discussed matters if few sites actually use a particular keyword.  For instance, a branded name may appear only on the company’s website, in which case entering it into a search engine may produce results only from the company’s site no matter what kind of search engine optimization takes place.

But for any website operator looking to be recognized by search engines the best advice is to: 1) write content that is first and foremost of interest to your site visitors, and 2) make sure to strategically include keywords that are directly related to the content and have a high likelihood of being entered as search terms.  When it comes to using keywords, the best rule-of-thumb is to err on the side of caution.  Usage should not overwhelm the reader because if it does it is almost guaranteed to raise a red flag with search engines.

SEM: Importance of Page Title

Nearly all experts in search engine marketing agree the most important element of an individual webpage is the title given to the page.  This is the information that appears at the very top of the browser window when a webpage loads and, within the underlying code (e.g., HTML code), is enclosed in the <title> element.

From a search engine’s point of view, page title is the first indication of the contents of the page.  While many factors on and off the page affect how a search engine interprets the page, the page title is the leading indicator.  Additionally, page title is the key information returned when search engines list results to a keyword search.

With this in mind we consider the following key concerns related to page title:

Keywords: Place Keywords in Title

The title should contain keywords relating to the page and, in particular, to keywords most likely entered in search engines by those who are potential visitors to a site.  In most cases keywords should be in the form of a string of words or a phrase rather than single words.  Keyword phrases hold a higher potential for bringing qualified customers to a site since the more detailed a person is in entering a phrase, the more likely they are to be truly interested in the topic of their search.  For example, a search engine user entering the keywords “techniques for branding consumer products” is likely to be more interested in that topic than someone who simply enters the search term “branding.”

When it comes to placing keywords in the page title, a major mistake of many websites is to use terminology that may be familiar to employees of the website (e.g., industry lingo, acronyms) but does not match the search terms users enter when searching.  An example from our own site proves this point.  When we first launched the site in 1998 we referred to a major Topic Area as “Marketing Research” mainly because this is how most marketing textbooks refer to research in marketing.  However, around 1999, Overture (now owned by Yahoo) released a keyword selector tool that allowed website marketers to see how many times keywords were entered into their search engine during a previous month.  We were surprised to see searchers are much more likely to use the phrase “market research” as part of the search string.  In fact, we tried this comparison again recently and found that search engine users are 12 times more likely to enter a keyword search using the phrase “market research” than the phrase “marketing research.”  The point here is that while it may be fine to include industry terminology, the page title should also reflect terms used by the average customer.

One last point: while placing keywords in the page title is necessary, it is wise not to go overboard by using the same keyword phrase multiple times in the same page title.  Instead, include in the page title two or three different phrases that describe what the page is about.  Otherwise search engines may believe the site is attempting to trick them (i.e., spam), which may result in the website being penalized.

Length: Understand the Length of Page Titles

Many sites appear to believe it is important for the name of the site (or company name) to appear on every page of their website, no matter how long the name may be.  While communication theory would suggest this is a good way for people to learn who you are, since they are repeatedly being exposed to the name, from a search engine perspective using the site name in all page titles is squandering potential opportunity.  The opportunity lies in the search traffic that may not come to your page because the keywords, while appearing in the title, are past the point at which some search engines will index.  To see this, do a Google search and examine the results.  If a site’s title is too long Google will display repeated dots (…) at the end.  Without the keywords visible in the viewable title, search engines may not associate the keywords with the site and if the page is listed users will not see the keywords highlighted in response to their search.  For example, if a site’s title includes the phrase “Techniques for Sales Lead Generation” but only “Techniques for Sales” is viewable the site may not benefit from searches that are directly related to what the page is about, namely techniques for generating sales leads.

While a long page title results in lost opportunity because keywords may not be viewable, opportunity can also be lost when a page title does not take advantage of the full title that search engines will recognize.  To take full advantage of what search engines will see, the marketer should spend time to ensure the page title utilizes the full space available and contains important keywords that best reflect the page and user search strings.

This discussion raises an obvious question: How long should the title be?  Well, it depends on the search engine.  With Google, the current total it will show in the title of a site is about 67 characters, including blank spaces.  For titles extending past 67 characters Google will cut the remainder and, in fact, will cut the title at the end of the last full word.  MSN Search has similar size limits, though it allows a few more characters, while Yahoo appears to be the most generous, displaying over 100 characters.  The basic rule to follow for a site trying to appeal to all search engines is to make sure the most important keywords are within the first 67 characters.
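The truncation behavior just described can be sketched as follows.  The 67-character figure is the limit quoted in this article, not a documented constant, and real display rules change over time:

```python
MAX_TITLE = 67  # approximate display limit quoted above for Google

def displayed_title(title, limit=MAX_TITLE):
    """Simulate a search engine cutting a long title at the last
    full word that fits, then appending an ellipsis."""
    if len(title) <= limit:
        return title
    cut = title[:limit]
    # back up to the end of the last complete word
    if " " in cut:
        cut = cut[:cut.rindex(" ")]
    return cut + " ..."

title = ("Techniques for Sales Lead Generation and Other Marketing "
         "Ideas for Small and Medium-Sized Businesses")
print(displayed_title(title))  # keywords past the cutoff never display
```

Running candidate titles through a check like this shows at a glance which keywords would survive truncation and which would be lost.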

This brings us back to the point regarding companies placing their name on all page titles.  How much of the valued page title space is the company name consuming?  The longer it is the less opportunity exists for placing important keywords in the page title.  For situations where the company name is long but company execs want the company name on all pages, the website marketer should consider: 1) shortening or abbreviating the name on inside pages, or 2) adding the name to the end of the page title instead of the beginning as in “Keyword Phrase, Keyword Phrase : Company Name”.

By the way, an easy way to figure out the length of a page title is to copy and paste it into a word processor that contains a word count feature.  In fact, a word processor is probably the best place to create page titles since it also provides spell checking ability.

Phrasing: Title Reflects Page Content but Ease Up on Grammar Rules

As we have already discussed, the page title should give search engines and, of course, site visitors a good idea of the content of the page and be built with a strong leaning toward the most likely search keywords.  (We will see in a later article that the keywords should also appear in the content of the webpage.)  With a limited number of characters available to describe the page, grammar and sentence structure are much less important in the page title than they are within the content of the page.

Writing page titles with such grammar-correct words as “the”, “and”, “is”, etc., may take up valuable character space that could be filled with more valuable keywords.  In addition, using a separator such as a dash “ – ” may also be a space waster since it takes up three character spaces, with blank spaces required on either side, compared to a comma “, ” which takes up only two spaces.  However, with all this said, I still argue that titles should “read right” and not be just a collection of keywords.

Individuality: Different Name for Each Page

Many sites fail to recognize that search engines do not always direct customers through a website’s front door (i.e., main page).  Instead, visitors may enter the site through a page the search engine believes is the best match for someone’s keyword search.  Consequently, nearly every page of a website should be considered its own unique place on the web.  This means basic webpage design characteristics, including what we discussed regarding page title, should apply to every important page on the site.

Importance of Internet Strategy: Part 3

Takes Prospects Right to the Sale

No other form of communication comes close to turning exposure to promotion into immediate customer action as the Internet, which allows customers to make purchases immediately after experiencing a promotion. Prior to the Internet, the most productive call-to-action was through television infomercials that encourage viewers to call toll-free phone numbers. However, moving customers from a non-active state (i.e., watching television) to an active state (i.e., picking up the phone to call the number) is not nearly as effective as getting people to click on an Internet ad while they are actively using the Internet.

Conveys Perception of Being a Full-Service Provider

For distributors and retailers the Internet makes it easy to be a comprehensive supplier. Unlike brick-and-mortar suppliers who are often judged by the inventory that is actually on hand or services provided at a store, e-commerce sites can give the illusion of having depth and breadth of inventory and service offerings. This can be accomplished by placing product and service information on the company’s website but behind the scenes having certain orders fulfilled by outside suppliers via shipping and service agreements. With such arrangements customers may feel they are dealing with providers that offer full-service when in reality a certain percentage of the products and service are obtained from other sources.

Lower Overhead, Lower Costs, Better Service

Internet technologies are replacing more expensive methods for delivering products and services, and for handling customer information needs. Cost savings can certainly be seen with products and services deliverable in digital form (e.g., music, publications, graphic design, etc.) where production and shipping expenses are essentially removed from the cost equation. Cost savings may also be seen in other marketing areas including customer service where the volume of customer phone calls may be reduced as companies provide online access to product information through such services as Knowledge Bases and answers to Frequently Asked Questions. Field salespeople may also see benefits by encouraging prospects to obtain product information online prior to a face-to-face meeting. This may help reduce the time devoted to explaining basic company and product information and leave more time for understanding and offering solutions to customers’ problems. As these examples suggest, the Internet may lower administrative and operational costs while offering greater value to customers.

Create Worldwide Presence

The Internet is a communication and distribution channel that offers global accessibility to a company’s product and service offerings. Through a website a local marketer can quickly become a global marketer and, by doing so, expand their potential target market to many times its current size. Unlike the days before e-commerce when marketing internationally was a time-consuming and expensive undertaking, the uploading of files to establish a website is all that is needed to create a worldwide presence. While establishing a website does not guarantee international sales (there is a lot more marketing work needed for the site to be viable internationally), the Internet provides a gigantic leap into global business compared to pre-Internet days.

Importance of Internet Strategy: Part 1

It may seem surprising, but many companies, big and small, have yet to develop a rational Internet marketing strategy. Considering the Internet has now been used effectively by marketers since 1994, any organization without a strategy to utilize the Internet is not fully aware of how important it has become. The Internet’s importance for marketing includes:

Go-To Place for Information

Possibly the most important reason why companies need to have an active Internet marketing strategy is because of the transformation that has occurred in how customers seek information. While customers still visit stores, talk to sales representatives, look through magazines, and talk to friends to gather product information, an ever-increasing number of customers turn to the Internet as their primary knowledge source. In particular, they use search engines as their principal portal of knowledge as search sites have become the leading destination sites for most Internet users. Marketers must recognize that the Internet is where customers are heading and, if the marketer wants to stay visible and viable, they must follow.

What Customers Expect

The Internet is not only becoming the resource of choice for finding information, in the next few years it is also likely to be the expected location where customers can learn about products and make purchases. This is especially the case for customers below the age of 25. In many countries, nearly all children and young adults have been raised knowing how to use the Internet. Once members of this group dominate home and business purchases they will clearly expect companies to have a strong Internet presence.

Captures a Wide Range of Customer Information

As a data collection tool the Internet is unmatched when it comes to providing information on customer activity. Each time a visitor accesses a website they leave an information trail that includes how they got to the site, how they navigated through the site, what they clicked on, what was purchased, and loads of other information. When matched to a method for customer identification, such as login information, the marketer has the ability to track a customer’s activity over repeated visits to the site. Knowing a customer’s behavior and preferences opens up tremendous opportunities to cater to customers’ needs and, if done correctly, the customer will respond with long-lasting loyalty.

Importance of Internet Strategy: Part 2

Extreme Target Marketing

The most efficient way for marketers to spend money is to direct spending to those who are most likely to be interested in what the marketer is offering. Unfortunately, efforts to target only customers who have the highest probability of buying have not been easy. For instance, consider how much money is wasted on television advertisements shown to people who probably will not buy. Yet the Internet’s unrivaled ability to identify and track customers has greatly improved marketers’ ability to target customers who exhibit the highest potential for purchasing products.

Stimulate Impulse Purchases

Whether customers like it or not, the Internet is proving to be the ultimate venue for inducing impulse purchases. Much of this can be attributed to marketers taking advantage of improvements in technologies that: 1) allow a website to offer product suggestions based on a customer’s online buying behavior, and 2) streamline the online purchasing process. But online impulse purchasing also takes advantage of the “purchase now, pay later” attitude common in an overspending credit card society. How this plays out over time as many customers become overwhelmed with debt will need to be watched and could impact online marketers’ activities.

Customized Product and Service Offerings

Companies know they can develop loyal customers when product and service offerings are designed to satisfy individual needs. This has led many online marketers to implement a mass customization strategy offering customers online options for configuring products or services. The interactive nature of the Internet makes “build-your-own” a relatively easy-to-implement purchasing option. An empowered customer base that feels a company will deliver exactly what it wants is primed to remain loyal for a long period of time.

Search Engine Marketing

For millions of people around the world, search engines have become the central doorway to all the Internet has to offer. In this role search engines have become extremely influential and powerful in their ability to funnel traffic to websites. However, many marketers remain unenlightened with regard to the power search engines possess in generating qualified customer traffic.

It is safe to conclude that many marketers have yet to grasp one highly important component of Internet marketing: search engines. While most marketers know search engines can help visitors find “stuff” on the web, many appear to be unaware of the overall importance search engines play today in customer lead generation, let alone recognize how search can dramatically affect how marketing will be done in the future. Quite simply, search engine sites are the number one reason people use the Web and are second only to email as the most important use of the Internet (technically email is a different protocol than the Web). And as we pointed out in our discussion of the Importance of Internet Strategy, search engines are fast becoming the first stop or go-to place for information of all kinds, including where customers go to learn about products and services.

Unfortunately many marketers, especially those who have traditionally operated offline, fail to make the connection between search engines as a tool for gaining information and search engines as a means for generating customers. Their lack of understanding can clearly be seen by examining company websites, which often fail to include fundamental design features minimally necessary for search engines to understand what the site is about. And if search engines struggle to recognize the substance of a site, marketers will not get close to experiencing the full potential for generating traffic through search engines.

With this in mind we cover the important issues in Search Engine Marketing. This information is designed to offer marketers basic strategies and tactics to increase website traffic via search engines. The focus here is on issues necessary for search engines to grasp what a website has to offer. What we will discuss are not tricks; rather these are simple, straightforward, logical design characteristics that make a site suitable for evaluation when a search engine visits (i.e., crawls) it as part of its indexing routine.