Project author: Dexterzhao

Project description: Customized Instagram Scraper
Language: Python
Repository: git://github.com/Dexterzhao/CusInscraper.git
Created: 2018-09-27T03:00:58Z
Project page: https://github.com/Dexterzhao/CusInscraper

License: MIT License



Instagram Scraper

instagram-scraper is a command-line application written in Python that scrapes and downloads an Instagram user's photos and videos. Use responsibly.

Some details:

  1. The program scrapes Instagram metadata for a given location, starting with what is being posted at the moment (people post continuously). As it keeps retrieving, it works its way back to older posts over time, which can be seen from the name and timestamp of each file. Only a few posts published after scraping starts will be picked up.

  2. Data is stored in JSON format, 100 posts per file; each file is named by "year/month/day/hour".

  3. Duplicates might occur if the program is stopped and restarted very frequently. They can be avoided by adding "--latest", but then every newly scraped post will be more recent than the old ones. If the dataset is massive, duplicates should instead be checked within each file that stores posts from the same hour; see the sketch after this list.

  4. Instagram currently does not provide an API for scraping followers; scraping them from web pages and constructing graphs is a work in progress. Neo4j is the tentative tool for representing and analyzing the resulting large graphs.
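
Since duplicates only need to be checked within the file that stores posts from the same hour, a minimal sketch could look like the following. It assumes each hourly file is a JSON list of post objects carrying an "id" field; the exact schema depends on the scraper's output, so adapt accordingly.

  import json
  from pathlib import Path

  def dedupe_hourly_file(path: Path) -> None:
      """Drop posts whose id was already seen within the same hourly file."""
      posts = json.loads(path.read_text())
      seen, unique = set(), []
      for post in posts:
          post_id = post.get("id")  # assumed unique per post
          if post_id not in seen:
              seen.add(post_id)
              unique.append(post)
      path.write_text(json.dumps(unique))

  # Check every hourly file under a location directory, e.g. ./212999109
  for f in Path("212999109").rglob("*.json"):
      dedupe_hourly_file(f)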

Install

Clone the project and run the following command to install (tested with Python 3.7.0b2). Make sure you cd into the CusInscraper-master folder before running it:

  $ python setup.py install

Usage

You always need to provide your username and password to use the scraper.
Of course, it is okay to use a throwaway account.

To get a location id from a location name (usually big cities come first in the returned results):

  $ instagram-scraper -u [your username] -p [your password] --search-location [location name]

For instance, to find the location-id of Los Angeles:

  $ instagram-scraper -u [your username] -p [your password] --search-location los angeles

The query result should look like this:

  [Screenshot: "los angeles" search-location results]

To scrape data from a given location id (--comments may be added to also save comment metadata):

  $ instagram-scraper -u [your username] -p [your password] --location [target location id] --media-types none --media-metadata -m [maximum number to scrape] --retry-forever

For instance, to scrape 1000000 posts from Los Angeles while only fetching the metadata together with comments and locations:

  $ instagram-scraper -u [your username] -p [your password] --location 212999109 --media-types none --comments --include-location -m 1000000 --retry-forever

By default, downloaded media will be placed in <current working directory>/<location-id>. The data is stored per hour, per day.
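
To sanity-check a run, a short sketch like the following can walk those per-hour metadata files and count posts. It assumes the files sit under <current working directory>/<location-id> and that each file is a JSON list of posts, per the layout described above; adapt the paths if yours differ.

  import json
  from pathlib import Path

  location_id = "212999109"  # the example location id from above
  total = 0
  for f in sorted(Path(location_id).rglob("*.json")):
      posts = json.loads(f.read_text())
      print(f"{f}: {len(posts)} posts")
      total += len(posts)
  print(f"total posts scraped: {total}")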

Breakpoint Recovery

  1. Save the following shell code as xxx.sh (before that, change instagram-scraper to the path where your instagram-scraper is installed). The shell script will restart the program whenever it crashes (exit code not 0); endcursor.txt holds the breakpoint that tells the program where to restart.

     #!/bin/bash
     until ./anaconda3/bin/instagram-scraper -u [Username] -p [Password] --location 212999109 --media-types none --media-metadata -m 24000000; do
         echo "Instagram-scraper crashed with exit code $?. Respawning.." >&2
         sleep 1
     done
  2. Change the access permission by running the following in a console:

     chmod +x xxx.sh
  3. Open crontab:

     crontab -e

     Add the following line and save the file; this will start the shell script when the system is rebooted:

     @reboot /path/to/shell/xxx.sh
  4. Run the shell script in the background:

     nohup ./xxx.sh &
  5. Termination: terminate the bash script before terminating the program. The program terminates itself when no more available cursors can be found or the maximum is reached.
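
The breakpoint itself follows a simple checkpoint pattern: persist the pagination cursor after every page so a restarted run resumes where the crashed one stopped. The sketch below illustrates the idea only; fetch_page() is a hypothetical stand-in, since the real pagination lives inside the scraper.

  from pathlib import Path
  from typing import List, Optional, Tuple

  CURSOR_FILE = Path("endcursor.txt")

  def fetch_page(after: Optional[str]) -> Tuple[List[dict], Optional[str]]:
      # Hypothetical stand-in: pretend there are three pages of posts.
      pages = {None: "c1", "c1": "c2", "c2": None}
      return [{"id": f"post-after-{after}"}], pages.get(after)

  def load_cursor() -> Optional[str]:
      # Resume from the saved breakpoint if one exists.
      return CURSOR_FILE.read_text().strip() if CURSOR_FILE.exists() else None

  cursor = load_cursor()
  while True:
      posts, cursor = fetch_page(after=cursor)
      # ... store posts ...
      if cursor is None:                  # no more available cursors: stop cleanly
          break
      CURSOR_FILE.write_text(cursor)      # save the breakpoint for the next restart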
OPTIONS

  --help -h            Show help message and exit.
  --login-user -u      Instagram login user.
  --login-pass -p      Instagram login password.
  --filename -f        Path to a file containing a list of users to scrape.
  --destination -d     Specify the download destination. By default, media will be downloaded to <current working directory>/<username>.
  --retain-username -n Creates a username subdirectory when the destination flag is set.
  --media-types -t     Specify media types to scrape. Enter as space separated values. Valid values are image, video, story (story-image & story-video), or none. Stories require a --login-user and --login-pass to be defined.
  --latest             Scrape only new media since the last scrape. Uses the last modified time of the latest media item in the destination directory to compare.
  --latest-stamps      Specify a file to save the timestamps of latest media scraped by user. This works similarly to --latest except the file specified by --latest-stamps will store the last modified time instead of using timestamps of media items in the destination directory. This allows the destination directories to be emptied whilst still maintaining history.
  --quiet -q           Be quiet while scraping.
  --maximum -m         Maximum number of items to scrape.
  --media-metadata     Saves the media metadata associated with the user's posts to <destination>/<username>.json. Can be combined with --media-types none to only fetch the metadata without downloading the media.
  --include-location   Includes location metadata when saving media metadata. Implicitly includes --media-metadata.
  --comments           Saves the comment metadata associated with the posts to <destination>/<username>.json. Implicitly includes --media-metadata.
  --interactive -i     Enables interactive login challenge solving. Has 2 modes: SMS and Email.
  --retry-forever      Retry download attempts endlessly when errors are received.
  --tag                Scrapes the specified hashtag for media.
  --filter             Scrapes the specified hashtag within a user's media.
  --location           Scrapes the specified Instagram location-id for media.
  --search-location    Search for a location by name. Useful for determining the location-id of a specific place.
  --template -T        Customize and format each file's name. Default: {urlname}. If the template is invalid, it will revert to the default. Does not work with --tag and --location. Options:
                         {username}: Scraped user
                         {shortcode}: Post shortcode (profile_pic and story are empty)
                         {urlname}: Original file name from url.
                         {mediatype}: The type of media being downloaded.
                         {datetime}: Date and time of upload. (Format: 20180101 01h01m01s)
                         {date}: Date of upload. (Format: 20180101)
                         {year}: Year of upload. (Format: 2018)
                         {month}: Month of upload. (Format: 01-12)
                         {day}: Day of upload. (Format: 01-31)
                         {h}: Hour of upload. (Format: 00-23h)
                         {m}: Minute of upload. (Format: 00-59m)
                         {s}: Second of upload. (Format: 00-59s)
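
For example, to name each file by its upload date and post shortcode when scraping a user's media (a hypothetical invocation; any combination of the placeholders above works the same way):

  $ instagram-scraper -u [your username] -p [your password] -T {date}_{shortcode} [target username]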

Develop

Clone the repo and create a virtualenv:

  $ virtualenv venv
  $ source venv/bin/activate
  $ python setup.py develop

Running Tests

  $ python setup.py test
  # or just
  $ nosetests