feature: Instruct search engines not to index node pages

Add a robots 'noindex' meta tag telling search engines not to index the page.

Eventually we will likely want to add nofollow and a robots file,
but first we need to get this pushed out and into search engines' hands
so they delist the entries that may already be cached.
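
For reference, the robots file mentioned above (not part of this commit) would conventionally be a `robots.txt` served from the site root. A minimal sketch that asks all crawlers to avoid the whole site might look like:

```
User-agent: *
Disallow: /
```

Note that `robots.txt` only discourages crawling; the noindex meta tag added here is what requests removal from the index itself.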

Change-Id: I47a68fcc8fb69c43c3f1bbe2b8a04a3e8d51571a
This commit is contained in:
Conrad Lara - KG6JEI 2016-08-16 18:15:06 -07:00
parent 7de59de3b9
commit 3962465066
1 changed file with 1 addition and 0 deletions


@@ -61,6 +61,7 @@ sub html_header
 print "<meta http-equiv='expires' content='0'>\n";
 print "<meta http-equiv='cache-control' content='no-cache'>\n";
 print "<meta http-equiv='pragma' content='no-cache'>\n";
+print "<meta name='robots' content='noindex'>";
 # set up the style sheet
 mkdir "/tmp/web" unless -d "/tmp/web"; # make sure /tmp/web exists