Multimodal contrastive models such as ConVIRT, CLIP, and ALIGN have recently seen a surge in capability and popularity. In this project we apply a similar setup, but our training data consists of amino acid sequences paired with their natural-language descriptions, sourced from the Universal Protein Resource (UniProt), an annotated protein database. The goal is a model that can be used like other CLIP-style models, but for amino acid sequences and text.
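
To make the training objective concrete, here is a minimal sketch of the symmetric contrastive loss used by CLIP-style models, written in PyTorch. This is an illustration of the general technique, not the exact CLASP implementation; the encoder outputs `seq_emb` and `text_emb` and the `temperature` value are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(seq_emb: torch.Tensor, text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of paired embeddings.

    seq_emb:  (batch, dim) embeddings from the amino acid sequence encoder
    text_emb: (batch, dim) embeddings from the text encoder
    Row i of each tensor is assumed to come from the same UniProt entry.
    """
    # L2-normalize so dot products are cosine similarities.
    seq_emb = F.normalize(seq_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix: entry (i, j) compares sequence i with text j.
    logits = seq_emb @ text_emb.t() / temperature

    # Matching pairs lie on the diagonal; contrast in both directions.
    targets = torch.arange(seq_emb.size(0), device=seq_emb.device)
    loss_seq = F.cross_entropy(logits, targets)        # sequence -> text
    loss_text = F.cross_entropy(logits.t(), targets)   # text -> sequence
    return (loss_seq + loss_text) / 2

# Usage with dummy embeddings (real ones would come from the two encoders):
seq_emb = torch.randn(8, 256)
text_emb = torch.randn(8, 256)
print(clip_style_loss(seq_emb, text_emb))
```

Averaging the sequence-to-text and text-to-sequence losses keeps the objective symmetric, so neither modality dominates training.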

GitHub Repo: https://github.com/MicPie/clasp