Project author: guokr

Project description:
Caver: a toolkit for multilabel text classification.
Programming language: Python
Project URL: git://github.com/guokr/Caver.git
Created: 2018-07-11T08:07:39Z
Project community: https://github.com/guokr/Caver

License: GNU General Public License v3.0

Caver

Raising a torch in the cave to see the words on the wall: tag your short text in 3 lines. Caver uses Facebook's PyTorch to make the implementation easier.




  • Demo
  • Requirements
  • Install
  • Pre-trained models
  • Train
  • Examples
  • Document



Quick Demo

  from caver import CaverModel

  model = CaverModel("./checkpoint_path")

  sentence = ["看 美 剧 学 英 语 靠 谱 吗",
              "科 比 携 手 姚 明 出 任 2019 篮 球 世 界 杯 全 球 大 使",
              "如 何 在 《 权 力 的 游 戏 》 中 苟 到 最 后",
              "英 雄 联 盟 LPL 夏 季 赛 RNG 能 否 击 败 TOP 战 队"]

  model.predict([sentence[0]], top_k=3)
  >>> ['美剧', '英语', '英语学习']
  model.predict([sentence[1]], top_k=5)
  >>> ['篮球', 'NBA', '体育', 'NBA 球员', '运动']
  model.predict([sentence[2]], top_k=7)
  >>> ['权力的游戏(美剧)', '美剧', '影视评论', '电视剧', '电影', '文学', '小说']
  model.predict([sentence[3]], top_k=6)
  >>> ['英雄联盟(LoL)', '电子竞技', '英雄联盟职业联赛(LPL)', '游戏', '网络游戏', '多人联机在线竞技游戏 (MOBA)']
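
Note that the demo inputs are tokenized at the character level: each Chinese character is separated by a space, while Latin tokens such as "LPL" or "2019" stay intact. If you start from raw text, you need to produce this format yourself. The helper below is a minimal sketch of such preprocessing; the function name to_char_tokens is not part of Caver's API.

  import re

  # Hypothetical helper, not part of Caver: split a raw sentence into the
  # space-separated form used in the demo above. CJK characters and punctuation
  # become single tokens; runs of Latin letters/digits ("LPL", "2019") stay whole.
  def to_char_tokens(text):
      tokens = re.findall(r"[A-Za-z0-9]+|\S", text)
      return " ".join(tokens)

  print(to_char_tokens("英雄联盟LPL夏季赛RNG能否击败TOP战队"))
  # -> 英 雄 联 盟 LPL 夏 季 赛 RNG 能 否 击 败 TOP 战 队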

Requirements

  • PyTorch
  • tqdm
  • torchtext
  • numpy
  • Python3

Install

  $ pip install caver --user
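
To confirm that the installation worked, you can simply import the class used in the quick demo:

  # Sanity check: this import should succeed without errors after installation.
  from caver import CaverModel
  print(CaverModel.__name__)  # CaverModel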

Do you have any pre-trained models?

Yes, we have released two pre-trained models trained on the Zhihu NLPCC 2018 open dataset.

If you want to use a pre-trained model for text tagging, you can download it (along with the other files needed for inference) from the Caver releases page. Alternatively, run the following commands to download and unpack the files into your current directory:

  $ wget -O - https://github.com/guokr/Caver/releases/download/0.1/checkpoints_char_cnn.tar.gz | tar zxvf -
  $ wget -O - https://github.com/guokr/Caver/releases/download/0.1/checkpoints_char_lstm.tar.gz | tar zxvf -
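
Once an archive is unpacked, point CaverModel at the extracted checkpoint directory, exactly as in the quick demo. The directory name below is an assumption based on the archive name; use whatever path the tarball actually extracts to.

  from caver import CaverModel

  # Assumed extraction path, derived from checkpoints_char_cnn.tar.gz;
  # adjust it to the directory the archive actually produced.
  model = CaverModel("./checkpoints_char_cnn")
  print(model.predict(["看 美 剧 学 英 语 靠 谱 吗"], top_k=3))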

How to train on your own dataset

  $ python3 train.py --input_data_dir {path to your original dataset} \
                     --output_data_dir {path to store the preprocessed dataset} \
                     --train_filename train.tsv \
                     --valid_filename valid.tsv \
                     --checkpoint_dir {path to save the checkpoints} \
                     --model {fastText/CNN/LSTM} \
                     --batch_size {16, adjust this to your own setup} \
                     --epoch {10}
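
After training finishes, the saved checkpoints can be loaded for inference the same way as the pre-trained models. A minimal sketch, assuming you passed ./my_checkpoints as --checkpoint_dir (the path is only an example):

  from caver import CaverModel

  # ./my_checkpoints stands for whatever path you passed as --checkpoint_dir above.
  model = CaverModel("./my_checkpoints")
  print(model.predict(["如 何 在 《 权 力 的 游 戏 》 中 苟 到 最 后"], top_k=5))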

More Examples

The examples are still being updated, but basically you can check the examples directory in the repository.