Image Captioning using LSTM and Deep Learning on the Flickr8K dataset.
Dataset download links:
https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_Dataset.zip
https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/Flickr8k_text.zip
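A minimal Python sketch for fetching and unpacking both archives (the destination folder is an assumption):

```python
import urllib.request
import zipfile

# File names taken from the release links above; "data" is an assumed target folder
for name in ["Flickr8k_Dataset.zip", "Flickr8k_text.zip"]:
    url = f"https://github.com/jbrownlee/Datasets/releases/download/Flickr8k/{name}"
    urllib.request.urlretrieve(url, name)
    with zipfile.ZipFile(name) as zf:
        zf.extractall("data")
```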
If GitHub has trouble rendering the notebook, use:
https://nbviewer.org/github/AmritK10/Image_Captioning/blob/master/image_captioning.ipynb
ResNet50 was used as the image encoder; its extracted feature vectors were then fed into the captioning model.
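A sketch of this encoding step, assuming Keras' bundled ResNet50 with the classification head removed (the pooling choice is an assumption):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image

# ResNet50 pretrained on ImageNet, without the final classification layer,
# so each image is encoded as a single 2048-d feature vector
encoder = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def encode_image(path):
    img = image.load_img(path, target_size=(224, 224))  # ResNet50 input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return encoder.predict(x)[0]  # shape: (2048,)
```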
A Keras Embedding layer was used to generate word embeddings for the captions, which were tokenized and integer-encoded beforehand.
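A sketch of that caption encoding, using Keras' Tokenizer (the toy caption and start/end tokens are illustrative assumptions):

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Toy example; the real captions come from the Flickr8k_text files
captions = ["startseq a dog runs on grass endseq"]
tokenizer = Tokenizer()
tokenizer.fit_on_texts(captions)
vocab_size = len(tokenizer.word_index) + 1  # +1 for the padding index

# Integer-encode and pad so every sequence has the same length
seqs = tokenizer.texts_to_sequences(captions)
max_len = max(len(s) for s in seqs)
padded = pad_sequences(seqs, maxlen=max_len, padding="post")
```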
The embeddings were then passed through an LSTM, after which the image and text features were combined and sent to a decoder network that predicts the next word of the caption.
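A sketch of the described architecture, assuming the common merge-style design; the layer sizes here are illustrative, not necessarily the repository's exact values:

```python
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, add
from tensorflow.keras.models import Model

vocab_size, max_len = 5000, 34  # assumed values for illustration

# Image branch: project the 2048-d ResNet50 features down to 256-d
img_in = Input(shape=(2048,))
img_feats = Dense(256, activation="relu")(Dropout(0.5)(img_in))

# Text branch: embed the partial caption and run it through an LSTM
txt_in = Input(shape=(max_len,))
embedded = Embedding(vocab_size, 256, mask_zero=True)(txt_in)
txt_feats = LSTM(256)(Dropout(0.5)(embedded))

# Decoder: combine both branches and predict the next word
merged = Dense(256, activation="relu")(add([img_feats, txt_feats]))
out = Dense(vocab_size, activation="softmax")(merged)

model = Model(inputs=[img_in, txt_in], outputs=out)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```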
Both Greedy Search and Beam Search were used to generate the captions.
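A sketch of greedy decoding under the assumptions above; beam search instead keeps the k most probable partial captions at each step rather than only the single best one:

```python
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def greedy_caption(model, tokenizer, photo_feats, max_len):
    """Greedily pick the most probable next word until 'endseq' or max_len."""
    text = "startseq"  # assumed start-of-caption token
    for _ in range(max_len):
        seq = tokenizer.texts_to_sequences([text])[0]
        seq = pad_sequences([seq], maxlen=max_len, padding="post")
        probs = model.predict([np.expand_dims(photo_feats, 0), seq], verbose=0)[0]
        word = tokenizer.index_word.get(int(np.argmax(probs)))
        if word is None or word == "endseq":
            break
        text += " " + word
    return text
```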
BLEU score was used to evaluate the generated captions (a scoring sketch follows the results below).
Greedy Search: 0.4776
Beam Search with k=3: 0.4930
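A sketch of the scoring step, assuming NLTK's corpus-level BLEU; the exact n-gram weights used for the scores above are not stated, so the default is shown:

```python
from nltk.translate.bleu_score import corpus_bleu

# references: per image, a list of tokenized ground-truth captions
# hypotheses: one tokenized generated caption per image (toy data below)
references = [[["a", "dog", "runs", "on", "grass"]]]
hypotheses = [["a", "dog", "runs", "on", "the", "grass"]]

score = corpus_bleu(references, hypotheses)  # default: uniform 4-gram BLEU
print(f"BLEU: {score:.4f}")
```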