GridView

<asp:TemplateField ItemStyle-HorizontalAlign="Center">
    <ItemTemplate>
        <asp:Button ID="btndel" Text="Delete" runat="server" CommandName="btnDelete"
            CommandArgument="<%# ((GridViewRow) Container).RowIndex %>"
            OnClientClick="return confirm('Do you want to delete?');" />
    </ItemTemplate>
</asp:TemplateField>


------- CSharp ----------

protected void gdv_RowCommand(object sender, GridViewCommandEventArgs e)
{
    try
    {
        // Ignore paging postbacks; only row-level commands carry a row index.
        if (e.CommandName == "Page")
            return;

        GridView gv = (GridView)sender;
        int rowIndex = Convert.ToInt32(e.CommandArgument.ToString());
        GridViewRow currentRow = gv.Rows[rowIndex];

        // Copy the holiday id from the row's label into the hidden field.
        hdnHolidayId.Value = ((Label)currentRow.FindControl("lblHolidayId")).Text;
    }
    catch (Exception)
    {
        // Log the error here so failed row commands are not silently swallowed.
    }
}

SEO

15 Minute SEO Audit

The basics of SEO problem identification can be done in about 15 minutes. When completing this audit, I recommend you take notes based on the action items listed in each section. This will help you later when you do a deeper dive of the website. This audit is not comprehensive (see Chapter 9 for a full annotated site audit), but it will help you quickly identify major problems so you can convince your clients that your services are worthwhile and that you should be given a chance to dig deeper. The smart ones reading this section may notice that it builds upon the ideas expressed in Chapter 2. The dumb ones reading this will think it is Harry Potter. The latter might enjoy it more, but the former will end up with better SEO skills.
Prepare Your Browser

Before you start your audit you need to set your browser to act more like the search engine crawlers. This will help you to identify simple crawling errors. In order to do this, you will need to do the following:
* Disable cookies in your browser
* Switch your user-agent to Googlebot

How Do I Do This and Why Is It Important?

When the search engines crawl the Internet they generally do so with a user-agent string that identifies them (Google is googlebot and Bing is msnbot) and in a way that does not accept cookies.

To see how to change your user-agent, go to Chapter 3 (Picking the Right SEO Tools) and see the user-agent switcher. Setting your user-agent to Googlebot increases your chance of seeing exactly what Google is seeing. It also helps with identifying cloaking issues. (Cloaking is the practice of showing one thing to search engines and a different thing to users; it is what sarcastic Googlers call penalty bait.) In order to do this well, a second pass of the site with your normal user-agent is required to identify differences. That said, this is not the primary goal for this quick run-through of the given website.

In addition to doing this you should also disable cookies within your browser. By disabling them, you will be able to uncover crawling issues that relate to preferences set on the page. One primary example of this is intro pages. Many websites will have you choose your primary language before you can enter their main site. (This is known as an intro page.) If you have cookies enabled and you have previously chosen your preference, the website will not show you this page again. Unfortunately, the search engines never get that convenience.
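If you would rather script this comparison than reconfigure your browser, a minimal C# sketch along these lines (assuming .NET's HttpClient; the URL is a placeholder for the site you are auditing) fetches a page the way a cookieless, Googlebot-identified client would, so you can diff it against what a normal browser receives:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CrawlerViewCheck
{
    static async Task Main()
    {
        // Refuse cookies, the way the search engine crawlers generally do.
        var handler = new HttpClientHandler { UseCookies = false };
        using var client = new HttpClient(handler);

        // Identify ourselves with Googlebot's user-agent string.
        client.DefaultRequestHeaders.UserAgent.ParseAdd(
            "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)");

        // Placeholder URL -- substitute the site you are auditing.
        string html = await client.GetStringAsync("http://www.example.com/");
        Console.WriteLine(html.Length + " bytes retrieved as a cookieless Googlebot");
    }
}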

This language tactic is extremely detrimental from an SEO perspective because it means that every link to the primary URL of the website will be diluted, since it will need to pass through the intro page. (Remember, the search engines always see that page because they can't select a language.) This is a big problem, because as we noted in Chapter 1, the primary URL (i.e. www.example.com/) is usually the most linked-to page on a site.
Homepage

Next, go to the primary URL of the site and pay particular attention to your first impression of the page. Try to be as true to your opinion as possible and don't overthink it. You should be coming from the perspective of the casual browser. (This is easier at this point because you probably haven't been paid any money yet, and it's a lot easier to be casual when you are not locked down with the client.) Follow this by doing a quick check of the very basic SEO metrics. In order to complete this step, you will need to do the following:
* Notice your first impression and the resulting sense of trustworthiness you feel about the page
* Read the title tag and figure out how it could be improved
* See if the URL changed (as in, you were redirected from www.example.com/ to www.example.com/lame-keyword-in-URL-trick.html)
* Check to see if the URL is canonical

How Do I Do This and Why Is It Important?

The first action item on this list helps you align yourself with potential website users. It is the basis for your entire audit and serves as a foundation for you to build on. You can look at numbers all day, but if you fail to see the website like the user, you will fail as an SEO.

The next step is to read the title tag and identify how it can be improved. This is helpful because changing title tags is easy (a big exception to this is if your client uses a difficult Content Management System) and has a relatively large direct impact on rankings.

Next you need to direct your attention to the URL. First of all, make sure no redirects happened. This is important because adding redirects dilutes the amount of link juice that actually makes it to the links on the page.

The last action item is to run a quick check on canonical URLs. The complete list of URL formats to check for is in Chapter 2 (Relearning How You See the Web). Like checking the title tag, this is easy to check and provides a high work/benefit ratio.
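If you want to spot these redirect and canonical issues programmatically rather than by eye, a rough C# sketch like the one below requests a few URL variations without following redirects and prints the status code and Location header for each (the hostnames are placeholders; use the variations listed in Chapter 2 for the site you are auditing):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CanonicalCheck
{
    static async Task Main()
    {
        // Don't follow redirects automatically -- we want to see them happen.
        var handler = new HttpClientHandler { AllowAutoRedirect = false };
        using var client = new HttpClient(handler);

        // Placeholder URL variations; substitute the domain you are auditing.
        string[] variations =
        {
            "http://example.com/",
            "http://www.example.com/",
            "http://www.example.com/index.html",
            "http://www.example.com/default.aspx",
        };

        foreach (string url in variations)
        {
            using var response = await client.GetAsync(url);
            Console.WriteLine($"{url} -> {(int)response.StatusCode} {response.Headers.Location}");
        }
    }
}

A healthy setup answers with a single 200 for the canonical form and 301s for everything else; a 200 on more than one variation is a canonicalization problem worth noting.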
Secret:

Usability experts generally agree that the old practice of cramming as much information as possible “above the fold” on content pages and homepages is no longer ideal. Placing a “call to action” in this area is certainly important, but it is not necessary to place all important information there. Many tests have been done on this and the evidence overwhelmingly shows that users scroll vertically (especially when led).
Global Navigation

After checking the basics on the homepage, you should direct your attention to the global navigation. This acts as the main canal system for link juice. Specifically, you are going to want to do the following:
* Temporarily disable Javascript and reload the page
* Make sure the navigation system works and that all links are HTML links
* Take note of all of the sections that are linked to
* Re-enable Javascript

How Do I Do This and Why Is It Important?

As we discussed in Chapter 2 (Relearning How You See the Web), site architecture is critical for search friendly websites. The global navigation is fundamental to this. Imagine that the website you are viewing is ancient Rome right after the legendary aqueduct and canal systems were built. These waterways are exactly like the global navigation that channels link juice around a website. Imagine the impact that a major clog can have on both systems. This is your time to find these clogs.

Your first action item in the section is to disable Javascript. This is helpful because it forces you to see your website from the perspective of a very basic user. It is also a similar perspective to the search engines.

After disabling Javascript, reload the page and see if the global navigation still works. Many times it won’t and it will uncover one of the major reasons the given client is having indexing issues.

Next, view the source and see if all of the navigational links are true HTML links. Ideally, they should be, because they are the only kind that can pass their full link value.

Your next step is to take note of which sections are linked to. Ideally, all of the major sections will be linked in the global navigation. The problem is, you won’t know what all of the major sections are until you are further along in the audit. For now just take note and keep a mental checklist as you browse the website.

Lastly, re-enable Javascript. While this will not be accurate with the search engine perspective, it will make sure that AJAX and Javascript based navigation works for you. Remember, on this quick audit, you are not trying to identify every single issue with the site, instead you are just trying to find the big issues.
Secret:

The global navigation menus that are the most search engine friendly appear as standard HTML unordered lists to search engines and people who don't have Javascript and/or CSS enabled. These menus use HTML, CSS pseudo-classes and optionally Javascript to provide users feedback on their mouse position. You can see an example of this in Chapter 9.
Category Pages/Subcategory Pages (If applicable)

After finishing with the homepage and the global navigation, you need to start diving deeper into the website. In the waterway analogy, category and subcategory pages are the forks in the canals. You can make sure they are optimized by doing the following:
* Make sure there is enough content on these pages to be useful as a search result alone
* Find and note extraneous links on the page (there shouldn't be more than 150 links)
* Take notes on how to improve the anchor text used for the subcategories/content pages

How Do I Do This and Why Is It Important?

As I mentioned, these pages are the main pathways for the link juice of a website. They help make it so that if one page (most often the homepage) gets a lot of links, the rest of the pages on the website can also get some of the benefit. The first action point requires you to make a judgment call on whether or not the page would be useful as a search result. This goes with my philosophy that every page on a website should be at least a little bit link worthy. (It should pay its own rent, so to speak.) Since each page has the inherent ability to collect links, webmasters should put at least a minimal amount of effort into making every page link worthy. There is no problem with someone entering a site (from a search engine result or other third-party site) on a category or subcategory page. In fact, it may save them a click. In order to complete this step, identify if this page alone would be useful for someone with a relevant query. Think to yourself:

1. Is there helpful content on the page to provide context?
2. Is there a design element breaking up the monotony of a large list of links?

Take notes on the answers to both of these questions.

The next action item is to identify extraneous links on the page. Remember, in Chapter 2 we discussed that the amount of link value a given link can pass depends on the number of links on the page. To maximize the benefit of these pages, it is important to remove any extraneous links. Going back to our waterway analogy, these kinds of links are the equivalent of “canals to nowhere.” (Built by the Roman ancestors of former Alaskan Senator Ted Stevens)

To complete the last action item of this section, you will need to take notes on how to better optimize the anchor text of the links on this page. Ideally, they should be as specific as possible. This helps the search engines and users identify what the target pages are about.
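For a rough check against the 150-link guideline above, a short C# sketch can count anchor tags in the page source (the URL is a placeholder, and a real HTML parser would be more robust than this regex approximation):

using System;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class LinkCount
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder category page; substitute the page you are auditing.
        string html = await client.GetStringAsync("http://www.example.com/widgets/");

        // Crude approximation: count anchor tags that carry an href attribute.
        int links = Regex.Matches(html, "<a\\s[^>]*href", RegexOptions.IgnoreCase).Count;

        Console.WriteLine(links > 150
            ? links + " links found: over the 150-link guideline, look for extraneous links"
            : links + " links found: within the 150-link guideline");
    }
}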
Secret:

Many people don't realize that category and subcategory pages actually stand a good chance of ranking for highly competitive phrases. When optimized correctly, these pages will have links from all of their child content pages and the website's homepage (giving them popularity) and include a lot of information about a specific topic (relevancy). Combine this with the fact that each link that goes to one of their child content pages also helps the given page, and you have a great pyramid structure for ranking success.
Content Pages

Now that you have analyzed the homepage and the navigational pages, it is time to audit the meat of the website, the content pages. In order to do this, you will need to complete the following:
* Check and note the format of the Title Tags
* Check and note the format of the Meta Description
* Check and note the format of the URL
* Check to see if the content is indexable
* Check and note the format of the alt text
* Read the content as if you were the one searching for it

How Do I Do This and Why Is It Important?

The first action item is to check the title tags of the given page. This is important because it is both helpful for rankings and makes up the anchor text used in search engine results. You don't get link value from these links but they do act as incentives for people to visit your site.
Tip:

SEOmoz did some intensive search engine ranking factors correlation testing on the subject of title tags. The results were relatively clear. If you are trying to rank for a very competitive term, it is best to include the keyword at the beginning of the title tag. If you are competing for a less competitive term and branding can help make a difference in click through rates, it is best to put the brand name first. With regards to special characters, I prefer pipes for aesthetic value but hyphens, n-dashes, m-dashes and subtraction signs are all fine. Thus, the best practice format for title tags is one of the following:

* Primary Keyword - Secondary Keywords | Brand
* Brand Name | Primary Keyword and Secondary Keywords

See http://www.seomoz.org/knowledge/title-tag/ for up-to-date information

Similarly to the first action item, the second item has to do with a metric that is directly useful for search engines rather than people (it is only indirectly useful for people once it is displayed in search results). Check the meta description by viewing source or using the mozBar and make sure it is compelling and contains the relevant keywords at least twice. This inclusion of keywords is useful not for rankings but because matches get bolded in search results.
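If you prefer to pull the title tag and meta description out programmatically rather than viewing source by hand, a crude C# sketch such as the following can do it (the URL is a placeholder and the regexes are an approximation; an HTML parser would be more reliable):

using System;
using System.Net.Http;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

class TitleAndMeta
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder content page; substitute the page you are auditing.
        string html = await client.GetStringAsync("http://www.example.com/widgets/blue-widget.html");

        // Extract the title tag text.
        Match title = Regex.Match(html, @"<title[^>]*>(.*?)</title>",
                                  RegexOptions.IgnoreCase | RegexOptions.Singleline);

        // Extract the meta description's content attribute.
        Match description = Regex.Match(html,
            "<meta\\s+name=[\"']description[\"']\\s+content=[\"'](.*?)[\"']",
            RegexOptions.IgnoreCase | RegexOptions.Singleline);

        Console.WriteLine("Title:       " + (title.Success ? title.Groups[1].Value : "(missing)"));
        Console.WriteLine("Description: " + (description.Success ? description.Groups[1].Value : "(missing)"));
    }
}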

The next action item is to check the URL for best practice optimization. Just like Danny DeVito, URLs should be short, relevant and easy to remember.

The next step is to make sure the content is indexable. To ensure that it is, make sure the text is not contained in an image, Flash or a frame. To make sure it is indexed, copy an entire sentence from the content block and search for it within quotes in a search engine. If it shows up, it is indexable.

If there are any images on the page (as there probably should be for users' sake) you should make sure that the images have relevant alt text. After running testing on this at SEOmoz, my co-workers and I found that relevant alt text was highly correlated to high rankings.

Lastly and possibly most importantly, you should take the time to read the content on the page. Read it from the perspective of a user who just got to it from a search engine result. This is important because the content on the page is the main purpose for the page existing. As an SEO, it can be easy to become content-blind when doing quick audits. Remember, the content is the primary reason this user came to the page. If it is not helpful, visitors will leave.
Links

Now that you have an idea of how the website is organized it is time to see what the rest of the world thinks about it. To do this, you will need to do the following:
* View the amount of total links and the amount of root domains linking to the given domain
* View the anchor text distribution of inbound links

How Do I Do This and Why Is It Important?

As you read in Chapter 1 (Understanding Search Engine Optimization), links are incredibly important in the search engine algorithms. Thus, you cannot get a complete view of a website without analyzing its links.

This first action item requires you to get two different metrics about the inbound links to the given domain. Separately, these metrics can be very misleading due to internal links. Together, they provide a fuller picture that makes accounting for internal links possible and thus more accurate. At the time of writing, the best tool to get this data is through SEOmoz’s Open Site Explorer.

The second action item requires you to analyze the relevancy side of links. This is important because it is a large part of the search engine algorithms. This was discussed in Chapter 1 (Understanding Search Engine Optimization) and holds as true now as it did when you read it earlier. To get this data, I recommend using Google's Webmaster Central.
Search Engine Inclusion

Now that you have gathered all the data you can about how the given website exists on the internet, it is time to see what the search engines have done with this information. Choose your favorite search engine (you might need to Google it) and do the following:
* Search for the given domain to make sure it isn't penalized
* See roughly how many pages of the given website are indexed
* Search three of the most competitive keywords that relate to the given domain
* Choose a random content page and search the engines for duplicate content

How Do I Do This and Why Is It Important?

As an SEO, all of your work is completely useless if the search engines don't react to it. To a lesser degree this is true for webmasters as well. The above action items will help you identify how the search engines have reacted to the given website.

The first action item is simple to do but can have dire effects. Simply go to a search engine and search for the exact URL of the homepage of your domain. Assuming it is not brand new, it should appear as the first result. If it doesn't and it is an established site, it means it has major issues and was probably thrown out of the search engine indices. If this is the case, you need to identify this clearly and as early as possible.

The second action item is also very easy to do. Go to any of the major search engines and use the site command (as defined in Chapter 3) to find roughly all of the pages of a domain that are indexed in the engine. For example, this may look like site:www.example.com. This is important because the difference between the number that gets returned and the number of pages that actually exist on a site says a lot about how healthy a domain is in a search engine. If there are more pages in the index than exist on the site, there is a duplicate content problem. If there are more pages on the actual site than there are in the search engine index, then there is an indexation problem. Either is bad and should be added to your notes.

The next action item is a quick exercise to see how well the given website is optimized. To get an idea of this, simply search for 3 of the most competitive terms that you think the given website would reasonably rank for. You can speed this process up by using one of the third party rank trackers that are available. (Refer back to Chapter 3)

The final action item is to do a quick search for duplicate content. This can be accomplished by going to a random indexed content page on the given website and searching for either the title tag (in quotes) or the first sentence of the content page (also in quotes). If there is more than one result from the given domain, then it has duplicate content problems. This is bad because it forces the website to compete against itself for rankings. In doing so, it forces the search engine to decide which page is more valuable. This decision making process is best avoided because it is difficult to predict the outcome.
Tip/Trick: Fix Common SEO Problems Using the URL Rewrite Extension

Search engine optimization (SEO) is important for any publicly facing web-site. A large percentage of traffic to sites now comes directly from search engines, and improving your site's search relevancy will lead to more users visiting your site from search engine queries. This can directly or indirectly increase the money you make through your site.

This blog post covers how you can use the free Microsoft URL Rewrite Extension to fix a bunch of common SEO problems that your site might have. It takes less than 15 minutes (and no code changes) to apply 4 simple URL Rewrite rules to your site, and in doing so cause search engines to drive more visitors and traffic to your site. The techniques below work equally well with both ASP.NET Web Forms and ASP.NET MVC based sites. They also work with all versions of ASP.NET (and even work with non-ASP.NET content).

[In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]
Measuring the SEO of your website with the Microsoft SEO Toolkit

A few months ago I blogged about the free SEO Toolkit that we’ve shipped. This useful tool enables you to automatically crawl/scan your site for SEO correctness, and it then flags any SEO issues it finds. I highly recommend downloading and using the tool against any public site you work on. It makes it easy to spot SEO issues you might have in your site, and pinpoint ways to optimize it further.

Below is a simple example of a report I ran against one of my sites (www.scottgu.com) prior to applying the URL Rewrite rules I’ll cover later in this blog post:

[screenshot: SEO Toolkit report for www.scottgu.com]
Search Relevancy and URL Splitting

Two of the important things that search engines evaluate when assessing your site’s “search relevancy” are:

1. How many other sites link to your content. Search engines assume that if a lot of people around the web are linking to your content, then it is likely useful and so weight it higher in relevancy.
2. The uniqueness of the content it finds on your site. If search engines find that the content is duplicated in multiple places around the Internet (or on multiple URLs on your site) then it is likely to drop the relevancy of the content.

One of the things you want to be very careful to avoid when building public facing sites is allowing different URLs to retrieve the same content within your site. Doing so will hurt you in both of the situations above.

In particular, allowing external sites to link to the same content with multiple URLs will cause your link-count and page-ranking to be split up across those different URLs (and so give you a smaller page rank than you would otherwise have if there were just one URL). Not allowing external sites to link to you in different ways sounds easy in theory – but you might wonder what exactly this means in practice and how you avoid it.
4 Really Common SEO Problems Your Sites Might Have

Below are 4 really common scenarios that can cause your site to inadvertently expose multiple URLs for the same content. When this happens external sites linking to yours will end up splitting their page links across multiple URLs - and as a result cause you to have a lower page ranking with search engines than you deserve.

SEO Problem #1: Default Document

IIS (and other web servers) supports the concept of a “default document”. This allows you to avoid having to explicitly specify the page you want to serve at either the root of the web-site/application, or within a sub-directory. This is convenient – but it means that by default this content is available via two different publicly exposed URLs (which is bad). For example:

http://scottgu.com/

http://scottgu.com/default.aspx

SEO Problem #2: Different URL Casings

Web developers often don’t realize URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

http://scottgu.com/Albums.aspx

http://scottgu.com/albums.aspx

SEO Problem #3: Trailing Slashes

Consider the below two URLs – they might look the same at first, but they are subtly different. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

http://scottgu.com

http://scottgu.com/

SEO Problem #4: Canonical Host Names

Sometimes sites are set up to respond both to a URL with a leading “www” hostname prefix and to one with just the hostname itself. This causes search engines to treat the URLs as different and split search rankings:

http://scottgu.com/albums.aspx/

http://www.scottgu.com/albums.aspx/

How to Easily Fix these SEO Problems in 10 minutes (or less) using IIS Rewrite

If you haven’t been careful when coding your sites, chances are you are suffering from one (or more) of the above SEO problems. Addressing these issues will improve your search engine relevancy ranking and drive more traffic to your site.

The “good news” is that fixing the above 4 issues is really easy using the URL Rewrite Extension. This is a completely free Microsoft extension available for IIS 7.x (on Windows Server 2008, Windows Server 2008 R2, Windows 7 and Windows Vista). The great thing about using the IIS Rewrite extension is that it allows you to fix the above problems *without* having to change any code within your applications.

You can easily install the URL Rewrite Extension in under 3 minutes using the Microsoft Web Platform Installer (a free tool we ship that automates setting up web servers and development machines). Just click the green “Install Now” button on the URL Rewrite Spotlight page to install it on your Windows Server 2008, Windows 7 or Windows Vista machine:

[screenshot]

Once installed you’ll find that a new “URL Rewrite” icon is available within the IIS 7 Admin Tool:

[screenshot]

Double-clicking the icon will open up the URL Rewrite admin panel – which will display the list of URL Rewrite rules configured for a particular application or site:

[screenshot]

Notice that our rewrite rule list above is currently empty (which is the default when you first install the extension). We can click the “Add Rule…” link button in the top-right of the panel to add and enable new URL Rewriting logic for our site.
Scenario 1: Handling Default Document Scenarios

One of the SEO problems I discussed earlier in this post was the scenario where the “default document” feature of IIS causes you to inadvertently expose two URLs for the same content on your site. For example:

http://scottgu.com/

http://scottgu.com/default.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the second URL to instead go to the first one. We will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

Let’s look at how we can create such a rule. We’ll begin by clicking the “Add Rule” link in the screenshot above. This will cause the below dialog to display:

[screenshot]

We’ll select the “Blank Rule” template within the “Inbound rules” section to create a new custom URL Rewriting rule. This will display an empty pane like below:

[screenshot]

Don’t worry – setting up the above rule is easy. The following 4 steps explain how to do so:

Step 1: Name the Rule

Our first step will be to name the rule we are creating. Naming it with a descriptive name will make it easier to find and understand later. Let’s name this rule our “Default Document URL Rewrite” rule:

[screenshot]

Step 2: Setup the Regular Expression that Matches this Rule

Our second step will be to specify a regular expression filter that will cause this rule to execute when an incoming URL matches the regex pattern. Don’t worry if you aren’t good with regular expressions - I suck at them too. The trick is to know someone who is good at them or copy/paste them from a web-site.

Below we are going to specify the following regular expression as our pattern rule:

(.*?)/?Default\.aspx$

This pattern will match any URL string that ends with Default.aspx. The "(.*?)" matches any preceding characters zero or more times (and captures them). The "/?" part says to match the slash symbol zero or one time. The "$" symbol at the end ensures that the pattern will only match strings that end with Default.aspx.

Combining all these regex elements allows this rule to work not only for the root of your web site (e.g. http://scottgu.com/default.aspx) but also for any application or subdirectory within the site (e.g. http://scottgu.com/photos/default.aspx). Because the “ignore case” checkbox is selected it will match both “Default.aspx” as well as “default.aspx” within the URL.

[screenshot]

One nice feature built-into the rule editor is a “Test pattern” button that you can click to bring up a dialog that allows you to test out a few URLs with the rule you are configuring:

[screenshot]

Above I've added a “products/default.aspx” URL and clicked the “Test” button. This will give me immediate feedback on whether the rule will execute for it.

Step 3: Setup a Permanent Redirect Action

We’ll then setup an action to occur when our regular expression pattern matches the incoming URL:

[screenshot]

In the dialog above I’ve changed the “Action Type” drop down to be a “Redirect” action. The “Redirect Type” will be an HTTP 301 Permanent redirect – which means search engines will follow it.

I’ve also set the “Redirect URL” property to be:

{R:1}/

This indicates that we want to redirect the web client requesting the original URL to a new URL that has the originally requested URL path - minus the "Default.aspx" in it. For example, requests for http://scottgu.com/default.aspx will be redirected to http://scottgu.com/, and requests for http://scottgu.com/photos/default.aspx will be redirected to http://scottgu.com/photos/.

The "{R:N}" regex construct, where N >= 0, is called a regular expression back-reference and N is the back-reference index. In the case of our pattern "(.*?)/?Default\.aspx$", if the input URL is "products/Default.aspx" then {R:0} will contain "products/Default.aspx" and {R:1} will contain "products". We are going to use this {R:1}/ value to be the URL we redirect users to.
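To see the same back-reference mapping outside of IIS, here is a small C# sketch (using System.Text.RegularExpressions, which follows the same group-numbering convention) that runs the rule's pattern against the example input above:

using System;
using System.Text.RegularExpressions;

class BackReferenceDemo
{
    static void Main()
    {
        // The same pattern used by the rewrite rule above.
        var pattern = new Regex(@"(.*?)/?Default\.aspx$", RegexOptions.IgnoreCase);

        Match match = pattern.Match("products/Default.aspx");

        // Groups[0] corresponds to {R:0}; Groups[1] corresponds to {R:1}.
        Console.WriteLine("{R:0} = " + match.Groups[0].Value);               // products/Default.aspx
        Console.WriteLine("{R:1} = " + match.Groups[1].Value);               // products
        Console.WriteLine("Redirect URL = " + match.Groups[1].Value + "/");  // products/
    }
}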

Step 4: Apply and Save the Rule

Our final step is to click the “Apply” button in the top right hand of the IIS admin tool – which will cause the tool to persist the URL Rewrite rule into our application’s root web.config file (under a configuration section):
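Reconstructed from the settings above (the rule name, match pattern, and permanent redirect action), the persisted rule should look roughly like this; the exact attribute defaults the tool writes may differ slightly:

<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- Redirect .../Default.aspx requests to the containing folder URL -->
        <rule name="Default Document URL Rewrite" stopProcessing="true">
          <match url="(.*?)/?Default\.aspx$" />
          <action type="Redirect" url="{R:1}/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>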

Because IIS 7.x and ASP.NET share the same web.config files, you can actually just copy/paste the above code into your web.config files using Visual Studio and skip the need to run the admin tool entirely. This also makes adding/deploying URL Rewrite rules with your ASP.NET applications really easy.

Step 5: Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:

http://scottgu.com/

http://scottgu.com/default.aspx

Notice that the second URL automatically redirects to the first one. Because it is a permanent redirect, search engines will follow the URL and should update the page ranking of http://scottgu.com to include links to http://scottgu.com/default.aspx as well.
Scenario 2: Different URL Casing

Another common SEO problem I discussed earlier in this post is that URLs are case sensitive to search engines on the web. This means that search engines will treat the following links as two completely different URLs:

http://scottgu.com/Albums.aspx

http://scottgu.com/albums.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL to instead go to the second (all lower-case) one. Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again:

[screenshot]

Unlike the previous scenario (where we created a “Blank Rule”), with this scenario we can take advantage of a built-in “Enforce lowercase URLs” rule template. When we click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that enforces the use of lowercase letters in URLs:

[screenshot]

When we click the “Yes” button we’ll get a pre-written rule that automatically performs a permanent redirect if an incoming URL has upper-case characters in it – and automatically sends users to a lower-case version of the URL:

[screenshot]

We can click the “Apply” button to use this rule “as-is” and have it apply to all incoming URLs to our site.

Because my www.scottgu.com site uses ASP.NET Web Forms, I’m going to make one small change to the rule we generated above – which is to add a condition that will ensure that URLs to ASP.NET’s built-in “WebResource.axd” handler are excluded from our case-sensitivity URL Rewrite logic. URLs to the WebResource.axd handler will only come from server-controls emitted from my pages – and will never be linked to from external sites. While my site will continue to function fine if we redirect these URLs to automatically be lower-case – doing so isn’t necessary and will add an extra HTTP redirect to many of my pages.

The good news is that adding a condition that prevents my URL Rewriting rule from running for certain URLs is easy. We simply need to expand the “Conditions” section of the form above:

[screenshot]

We can then click the “Add” button to add a condition clause. This will bring up the “Add Condition” dialog:

[screenshot]

Above I’ve entered {URL} as the Condition input – and said that this rule should only execute if the URL does not match a regex pattern which contains the string “WebResource.axd”. This will ensure that WebResource.axd URLs to my site will be allowed to execute just fine without having the URL be re-written to be all lower-case.

Note: If you have static resources (like references to .jpg, .css, and .js files) within your site that currently use upper-case characters you’ll probably want to add additional condition filter clauses so that URLs to them also don’t get redirected to be lower-case (just add rules for patterns like .jpg, .gif, .js, etc). Your site will continue to work fine if these URLs get redirected to be lower case (meaning the site won’t break) – but it will cause an extra HTTP redirect to happen on your site for URLs that don’t need to be redirected for SEO reasons. So setting up a condition clause makes sense to add.
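As a rough illustration (the extension list is an example; adjust it to the file types your site actually serves), those extra condition clauses would sit alongside the WebResource.axd one inside the rule's conditions element, something like this:

<conditions>
  <!-- Skip ASP.NET's WebResource.axd handler URLs -->
  <add input="{URL}" pattern="WebResource.axd" negate="true" />
  <!-- Skip common static resources so they aren't redirected just for casing -->
  <add input="{URL}" pattern="\.(jpg|gif|png|css|js)$" negate="true" />
</conditions>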

When I click the “ok” button above and apply our lower-case rewriting rule the admin tool will save the following additional rule to our web.config file:
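The rule the “Enforce lowercase URLs” template generates, plus the WebResource.axd condition we added, should look roughly like this inside the existing rewrite/rules section of web.config (reconstructed here; the auto-generated rule name may differ):

<rule name="LowerCaseRule1" stopProcessing="true">
  <!-- Match any URL that contains at least one upper-case character -->
  <match url="[A-Z]" ignoreCase="false" />
  <conditions>
    <!-- Skip WebResource.axd URLs so they aren't needlessly redirected -->
    <add input="{URL}" pattern="WebResource.axd" negate="true" />
  </conditions>
  <!-- Permanently redirect to the lower-cased form of the requested URL -->
  <action type="Redirect" url="{ToLower:{R:0}}" />
</rule>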

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:

http://scottgu.com/Albums.aspx

http://scottgu.com/albums.aspx

Notice that the first URL (which has a capital “A”) automatically does a redirect to a lower-case version of the URL.
Scenario 3: Trailing Slashes

Another common SEO problem I discussed earlier in this post is the scenario of trailing slashes within URLs. The trailing slash creates yet another situation that causes search engines to treat the URLs as different and so split search rankings:

http://scottgu.com

http://scottgu.com/

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that does not have a trailing slash) to instead go to the second one that does. Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again:

[screenshot]

The URL Rewrite admin tool has a built-in “Append or remove the trailing slash symbol” rule template.

When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a rule that automatically redirects users to a URL with a trailing slash if one isn’t present:

[screenshot]

When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL doesn’t have a trailing slash – and only if the URL does not map to a physical file or directory on disk.

Like within our previous lower-casing rewrite rule, we’ll add one additional condition clause that will exclude WebResource.axd URLs from being processed by this rule. This will avoid an unnecessary redirect from happening for those URLs.

This will save the following additional rule to our web.config file:
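Reconstructed from the “Append or remove the trailing slash symbol” template plus our extra condition, the rule should look roughly like this (again nested inside the existing rules section; the auto-generated rule name may differ):

<rule name="AddTrailingSlashRule1" stopProcessing="true">
  <!-- Match any URL that does not already end with a slash -->
  <match url="(.*[^/])$" />
  <conditions>
    <!-- Only redirect when the URL is not a physical file or directory -->
    <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
    <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
    <!-- Skip WebResource.axd URLs, as in the lower-casing rule -->
    <add input="{URL}" pattern="WebResource.axd" negate="true" />
  </conditions>
  <!-- Permanently redirect to the same URL with a trailing slash appended -->
  <action type="Redirect" url="{R:1}/" />
</rule>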

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:

http://scottgu.com

http://scottgu.com/

Notice that the first URL (which has no trailing slash) automatically does a redirect to a URL with the trailing slash. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.
Scenario 4: Canonical Host Names

The final SEO problem I discussed earlier is the scenario where a site works with both a leading “www” hostname prefix as well as just the hostname itself. This causes search engines to treat the URLs as different and split search rankings:

http://www.scottgu.com/albums.aspx

http://scottgu.com/albums.aspx

We can fix this by adding a new IIS Rewrite rule that automatically redirects anyone who navigates to the first URL (that has a www prefix) to instead go to the second URL. Like before, we will setup the HTTP redirect to be a “permanent redirect” – which will indicate to search engines that they should follow the redirect and use the new URL they are redirected to as the identifier of the content they retrieve.

To create such a rule we’ll click the “Add Rule” link in the URL Rewrite admin tool again. This will cause the “Add Rule” dialog to appear again:

[screenshot]

The URL Rewrite admin tool has a built-in “Canonical domain name” rule template.

When we select it and click the “ok” button we’ll see the following dialog which asks us if we want to create a redirect rule that automatically redirects users to a primary host name URL:

[screenshot]

Above I’m entering the primary URL address I want to expose to the web: scottgu.com. When we click the “OK” button we’ll get a pre-written rule that automatically performs a permanent redirect if the URL has another leading domain name prefix.

This will save the following additional rule to our web.config file:
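Reconstructed from the “Canonical domain name” template with scottgu.com entered as the primary host name, the rule should look roughly like this (the auto-generated rule name may differ):

<rule name="CanonicalHostNameRule1" stopProcessing="true">
  <match url="(.*)" />
  <conditions>
    <!-- Only fire when the request's host header is not exactly scottgu.com -->
    <add input="{HTTP_HOST}" pattern="^scottgu\.com$" negate="true" />
  </conditions>
  <!-- Permanently redirect to the same path on the canonical host name -->
  <action type="Redirect" url="http://scottgu.com/{R:1}" />
</rule>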

Try the Rule Out

Now that we’ve saved the rule, let’s try it out on our site. Try the following two URLs on my site:

http://www.scottgu.com/albums.aspx

http://scottgu.com/albums.aspx

Notice that the first URL (which has the “www” prefix) now automatically does a redirect to the second URL which does not have the www prefix. Because it is a permanent redirect, search engines will follow the URL and update the page ranking.

4 Simple Rules for Improved SEO

The above 4 rules are pretty easy to setup and should take less than 15 minutes to configure on existing sites you already have.

The beauty of using a solution like the URL Rewrite Extension is that you can take advantage of it without having to change code within your web-site – and without having to break any existing links already pointing at your site. Users who follow existing links will be automatically redirected to the new URLs you wish to publish. And search engines will start to give your site a higher search relevancy ranking – which will list your site higher in search results and drive more traffic to it.

Customizing your URL Rewriting rules further is easy to do either by editing the web.config file directly, or alternatively, just double click the URL Rewrite icon within the IIS 7.x admin tool and it will list all the active rules for your web-site or application:

[screenshot]

Clicking any of the rules above will open the rules editor back up and allow you to tweak/customize/save them further.
Summary

Measuring and improving SEO is something every developer building a public-facing web-site needs to think about and focus on. If you haven’t already, download and use the SEO Toolkit to analyze the SEO of your sites today.

New URL Routing features in ASP.NET MVC and ASP.NET Web Forms 4 make it much easier to build applications that have more control over the URLs that are published. Tools like the URL Rewrite Extension that I’ve talked about in this blog post make it much easier to improve the URLs that are published from sites you already have built today – without requiring you to change a lot of code.

The URL Rewrite Extension provides a bunch of additional great capabilities – far beyond just SEO - as well. I’ll be covering these additional capabilities more in future blog posts.

Hope this helps,

Scott
1. What is a Database?
A database is a logically coherent collection of data with some inherent meaning, representing some aspect of the real world, and which is designed, built and populated with data for a specific purpose.
2. What is DBMS?
It is a collection of programs that enables users to create and maintain a database. In other words, it is general-purpose software that provides users with the processes of defining, constructing and manipulating the database for various applications.
3. What is a Database system?
The database and the DBMS software together are called a database system.
4. Advantages of DBMS?
· Redundancy is controlled.
· Unauthorised access is restricted.
· Multiple user interfaces are provided.
· Integrity constraints are enforced.
· Backup and recovery are provided.
5. Disadvantages of a File Processing System?
· Data redundancy & inconsistency.
· Difficulty in accessing data.
· Data isolation.
· Data integrity.
· Concurrent access is not possible.
· Security Problems.
6. Describe the three levels of data abstraction?
There are three levels of abstraction:
1. Physical level: The lowest level of abstraction; describes how data are stored.
2. Logical level: The next higher level of abstraction; describes what data are stored in the database and what relationships exist among those data.
3. View level: The highest level of abstraction; describes only a part of the entire database.
7. Define the "Integrity Rules"
There are two integrity rules:
1. Entity Integrity: States that a primary key cannot have a NULL value.
2. Referential Integrity: States that a foreign key must be either NULL or the primary key value of another relation.