Project author: jonasgrebe

Project description:
Implementation of related angular-margin-based classification loss functions for training (face) embedding models: SphereFace, CosFace, ArcFace, and MagFace.
Language: Python
Repository: git://github.com/jonasgrebe/pt-femb-face-embeddings.git
Created: 2021-03-31T13:02:30Z
Project page: https://github.com/jonasgrebe/pt-femb-face-embeddings

License:


WIP: femb - Simple Face Embedding Training Library

```python
import torch

from femb.backbones import build_backbone
from femb.headers import ArcFaceHeader, MagFaceHeader
from femb import FaceEmbeddingModel

embed_dim = 512        # dimensionality of the face embeddings
train_n_classes = 1000 # number of identities in the training set

# build the backbone embedding network
backbone = build_backbone(backbone="iresnet18", embed_dim=embed_dim)

# create one of the face recognition headers
# header = ArcFaceHeader(in_features=embed_dim, out_features=train_n_classes)
header = MagFaceHeader(in_features=embed_dim, out_features=train_n_classes)

# create the cross-entropy loss
loss = torch.nn.CrossEntropyLoss()

# create the face recognition model wrapper
face_model = FaceEmbeddingModel(backbone=backbone, header=header, loss=loss)
```

Basic Framework:

  • Backbone: The actual embedding network that we want to train. It takes some kind of input and produces a feature representation (embedding) of a certain dimensionality.
  • Header: A training-only extension to the backbone network that is used to predict the identity class logits for the loss function. This is the main part where the implemented methods (SphereFace, CosFace, …) differ.
  • Loss: The loss function that is used to judge how well the (manipulated) logits match the one-hot encoded identity target. Usually, this is the cross-entropy loss.
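To illustrate how such a header manipulates the logits, here is a minimal ArcFace-style sketch in plain PyTorch. This is not femb's actual implementation; the class name `ToyArcMarginHeader` and the default scale `s` and margin `m` are illustrative (the values are the ones commonly used in the ArcFace paper):

```python
import torch
import torch.nn.functional as F

class ToyArcMarginHeader(torch.nn.Module):
    """Illustrative ArcFace-style header: adds an additive angular margin
    to the target-class angle before scaling the logits."""

    def __init__(self, in_features, out_features, s=64.0, m=0.5):
        super().__init__()
        # one learnable "class center" vector per identity
        self.weight = torch.nn.Parameter(torch.randn(out_features, in_features))
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # cosine similarity between normalized embeddings and class centers
        cos = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the margin m only to the angle of the target class
        one_hot = F.one_hot(labels, num_classes=cos.size(1)).bool()
        theta = torch.where(one_hot, theta + self.m, theta)
        # scaled logits, ready for cross-entropy
        return self.s * torch.cos(theta)
```

Since the margin only makes the target class harder to predict during training, the header is dropped at inference time and only the backbone's embeddings are used.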
```python
import torch

from femb.evaluation import VerificationEvaluator

# create the verification evaluator
evaluator = VerificationEvaluator(similarity='cos')

# specify the optimizer (and a scheduler)
optimizer = torch.optim.SGD(params=face_model.params, lr=1e-2, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer=optimizer, milestones=[8000, 10000, 160000], gamma=0.1)

# fit the face embedding model to the dataset
face_model.fit(
    train_dataset=train_dataset,         # specify the training set
    batch_size=32,                       # batch size for training and evaluation
    device='cuda',                       # torch device, i.e. 'cpu' or 'cuda'
    optimizer=optimizer,                 # torch optimizer
    lr_epoch_scheduler=None,             # scheduler based on epochs
    lr_global_step_scheduler=scheduler,  # scheduler based on global steps
    evaluator=evaluator,                 # evaluator module
    val_dataset=val_dataset,             # specify the validation set
    evaluation_steps=10,                 # number of steps between evaluations
    max_training_steps=20000,            # maximum number of (global) training steps (if zero, max_epochs is used for stopping)
    max_epochs=0,                        # maximum number of epochs (if zero, max_training_steps is used for stopping)
    tensorboard=True                     # whether tensorboard should be used for embedding projections and metric monitoring
)
```

Implemented Losses

  • SoftMax Loss (LinearHeader)
  • SphereFace Loss (SphereFaceHeader): Paper, Code
  • CosFace Loss (CosFaceHeader): Paper, Code
  • ArcFace Loss (ArcFaceHeader): Paper, Code
  • MagFace Loss (MagFaceHeader): Paper, Code
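The headers above differ mainly in how they modify the target-class logit as a function of the angle θ between an embedding and its class-center vector. A small sketch of the three classic margin formulations (function names are illustrative, not part of femb; the margin defaults are the ones proposed in the respective papers):

```python
import math

def sphereface_logit(theta, m=4):
    # SphereFace: multiplicative angular margin, cos(m * theta)
    return math.cos(m * theta)

def cosface_logit(theta, m=0.35):
    # CosFace: additive cosine margin, cos(theta) - m
    return math.cos(theta) - m

def arcface_logit(theta, m=0.5):
    # ArcFace: additive angular margin, cos(theta + m)
    return math.cos(theta + m)
```

MagFace extends the ArcFace formulation by making the margin a function of the embedding's magnitude, so that higher-quality faces are pushed further from the decision boundary.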

TODOS

  • Add links to papers
  • Add inference methods to model.py
  • Add comments and documentation
  • Refactor code
  • Test implementation