Project author: samclarke

Project description:
NodeJS robots.txt parser with support for wildcard (*) matching.
Language: JavaScript
Repository: git://github.com/samclarke/robots-parser.git
Created: 2014-09-27T12:25:33Z
Project community: https://github.com/samclarke/robots-parser

License: MIT License

Robots Parser

A robots.txt parser which aims to be compliant with the RFC 9309 specification.

The parser currently supports:

  • User-agent:
  • Allow:
  • Disallow: (with explicit mode support)
  • Sitemap:
  • Crawl-delay:
  • Host:
  • Paths with wildcards (*) and end-of-line matching ($), illustrated in the sketch below
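
As a quick illustration of the wildcard and end-of-line support, a minimal sketch using hypothetical rules (not taken from the library's own examples):

```js
var robotsParser = require('robots-parser');

// Hypothetical rules showing wildcard (*) and end-of-line ($) matching:
var robots = robotsParser('http://www.example.com/robots.txt', [
    'User-agent: *',
    'Disallow: /*.json$',  // any path ending in .json
    'Disallow: /private*'  // any path starting with /private
].join('\n'));

robots.isDisallowed('http://www.example.com/data.json', 'Sams-Bot/1.0');    // true  (ends in .json)
robots.isAllowed('http://www.example.com/data.json.txt', 'Sams-Bot/1.0');   // true  ($ anchors at the end)
robots.isDisallowed('http://www.example.com/private/docs', 'Sams-Bot/1.0'); // true  (wildcard prefix)
```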

Installation

Via NPM:

```
npm install robots-parser
```

or via Yarn:

```
yarn add robots-parser
```

Usage

```js
var robotsParser = require('robots-parser');

var robots = robotsParser('http://www.example.com/robots.txt', [
    'User-agent: *',
    'Disallow: /dir/',
    'Disallow: /test.html',
    'Allow: /dir/test.html',
    'Allow: /test.html',
    'Crawl-delay: 1',
    'Sitemap: http://example.com/sitemap.xml',
    'Host: example.com'
].join('\n'));

robots.isAllowed('http://www.example.com/test.html', 'Sams-Bot/1.0'); // true
robots.isAllowed('http://www.example.com/dir/test.html', 'Sams-Bot/1.0'); // true
robots.isDisallowed('http://www.example.com/dir/test2.html', 'Sams-Bot/1.0'); // true
robots.isExplicitlyDisallowed('http://www.example.com/dir/test2.html', 'Sams-Bot/1.0'); // false
robots.getCrawlDelay('Sams-Bot/1.0'); // 1
robots.getSitemaps(); // ['http://example.com/sitemap.xml']
robots.getPreferredHost(); // example.com
```

isAllowed(url, [ua])

boolean or undefined

Returns true if crawling the specified URL is allowed for the specified user-agent.

This will return undefined if the URL isn’t valid for this robots.txt.
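
For example, reusing the `robots` instance from the Usage section above, a URL on a different host is not covered by that robots.txt:

```js
// The robots.txt above belongs to www.example.com, so other hosts are out of scope:
robots.isAllowed('http://other.example.org/test.html', 'Sams-Bot/1.0'); // undefined
```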

isDisallowed(url, [ua])

boolean or undefined

Returns true if crawling the specified URL is not allowed for the specified user-agent.

This will return undefined if the URL isn’t valid for this robots.txt.

isExplicitlyDisallowed(url, ua)

boolean or undefined

[!CAUTION]
This is not part of the robots.txt specification and should only be used with
the website owner's permission.
This method is only intended for special cases where a user-agent shouldn't
fall back to matching against global (*) rules.

An example of this behaviour is Google AdsBot,
which must be explicitly excluded. This is done with the website owner's permission.

Returns true if the URL is explicitly disallowed for the specified user-agent (user-agent wildcards are discarded).

This will return undefined if the URL is not valid for this robots.txt file.
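
A minimal sketch of the difference, using hypothetical rules and the AdsBot example above:

```js
var robotsParser = require('robots-parser');

var robots = robotsParser('http://www.example.com/robots.txt', [
    'User-agent: *',
    'Disallow: /',               // global rule: disallow everything
    '',
    'User-agent: AdsBot-Google',
    'Disallow: /private/'        // explicit rule for this user-agent
].join('\n'));

// Explicitly disallowed by the AdsBot-Google group:
robots.isExplicitlyDisallowed('http://www.example.com/private/page.html', 'AdsBot-Google'); // true

// Not disallowed by the AdsBot-Google group; the global (*) rule is discarded here:
robots.isExplicitlyDisallowed('http://www.example.com/other.html', 'AdsBot-Google'); // false
```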

getMatchingLineNumber(url, [ua])

number or undefined

Returns the line number of the matching directive for the specified URL and user-agent, if any.

Line numbers start at 1 (1-based indexing).

Returns -1 if there is no matching directive. If a rule is added manually without a lineNumber, this will return undefined for that rule.
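
For instance, reusing the `robots` instance from the Usage section (line numbers refer to the robots.txt content passed to the parser):

```js
// 'Allow: /dir/test.html' is line 4 of the robots.txt built in the Usage section:
robots.getMatchingLineNumber('http://www.example.com/dir/test.html', 'Sams-Bot/1.0'); // 4

// No directive matches the root path:
robots.getMatchingLineNumber('http://www.example.com/', 'Sams-Bot/1.0'); // -1
```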

getCrawlDelay([ua])

number or undefined

Returns the number of seconds the specified user-agent should wait between requests.

Returns undefined if no crawl delay has been specified for this user-agent.

getSitemaps()

array

Returns an array of sitemap URLs specified by the sitemap: directive.

getPreferredHost()

string or null

Returns the preferred host name specified by the host: directive or null if there isn’t one.

Changes

Version 3.0.1

  • Fixed bug with https: URLs defaulting to port 80 instead of 443 if no port is specified.
    Thanks to @dskvr for reporting

    This affects comparing URLs with the default HTTPS port to URLs without it.
    For example, comparing https://example.com/ to https://example.com:443/ or vice versa.

    They should be treated as equivalent but weren't, due to the incorrect port
    being used for https:. A sketch of the fixed comparison follows.
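
    A minimal sketch, assuming hypothetical robots.txt content:

    ```js
    var robotsParser = require('robots-parser');

    var robots = robotsParser('https://example.com/robots.txt', [
        'User-agent: *',
        'Disallow: /private/'
    ].join('\n'));

    // With the fix, an explicit default port (443) compares equal to no port:
    robots.isDisallowed('https://example.com:443/private/page', 'Sams-Bot/1.0'); // true
    robots.isDisallowed('https://example.com/private/page', 'Sams-Bot/1.0');     // true
    ```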

Version 3.0.0

  • Changed to use the global URL object instead of importing it. – Thanks to @brendankenny

Version 2.4.0:

  • Added TypeScript definitions
    – Thanks to @danhab99 for creating them
  • Added SECURITY.md policy and CodeQL scanning

Version 2.3.0:

  • Fixed bug where a user-agent named “constructor” passed to isAllowed() / isDisallowed() would throw an error.
  • Added support for relative URLs. This does not affect the default behavior, so it is safe to upgrade.

    Relative matching is only allowed if both the robots.txt URL and the URLs being checked are relative.

    For example:

    ```js
    var robots = robotsParser('/robots.txt', [
        'User-agent: *',
        'Disallow: /dir/',
        'Disallow: /test.html',
        'Allow: /dir/test.html',
        'Allow: /test.html'
    ].join('\n'));

    robots.isAllowed('/test.html', 'Sams-Bot/1.0'); // true
    robots.isAllowed('/dir/test.html', 'Sams-Bot/1.0'); // true
    robots.isDisallowed('/dir/test2.html', 'Sams-Bot/1.0'); // true
    ```

Version 2.2.0:

  • Fixed bug with matching wildcard patterns against some URLs
    – Thanks to @ckylape for reporting and fixing
  • Changed matching algorithm to match Google’s implementation in google/robotstxt
  • Changed order of precedence to match current spec

Version 2.1.1:

  • Fixed bug that could be exploited to make rule checking take a long time
    – Thanks to @andeanfog

Version 2.1.0:

  • Removed use of the punycode module APIs as the new URL API handles it
  • Improved test coverage
  • Added tests for percent encoded paths and improved support
  • Added getMatchingLineNumber() method
  • Fixed bug with comments on same line as directive

Version 2.0.0:

This release is not 100% backwards compatible as it now uses the new URL APIs which are not supported in Node < 7.

  • Updated code to not use deprecated URL module APIs.
    – Thanks to @kdzwinel

Version 1.0.2:

  • Fixed error caused by invalid URLs missing the protocol.

Version 1.0.1:

  • Fixed bug with the “user-agent” rule being treated as case sensitive.
    – Thanks to @brendonboshell
  • Improved test coverage.
    – Thanks to @schornio

Version 1.0.0:

  • Initial release.

License

The MIT License (MIT)

Copyright (c) 2014 Sam Clarke

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.