Eval Harness

GitHub Repo: https://github.com/EleutherAI/lm-evaluation-harness
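The harness provides a single framework for benchmarking autoregressive language models across a large set of tasks. As a minimal sketch of what an evaluation run can look like (the backend name, task name, and checkpoint below are illustrative assumptions, not a prescribed configuration):

    # Minimal sketch: evaluate a model on one task through the harness's
    # simple_evaluate entry point. Backend and task names are assumptions;
    # see the repository for the supported lists.
    from lm_eval import evaluator

    results = evaluator.simple_evaluate(
        model="hf",                    # Hugging Face backend (assumed name)
        model_args="pretrained=gpt2",  # checkpoint to load
        tasks=["lambada_openai"],      # task(s) to run (assumed name)
        num_fewshot=0,                 # zero-shot evaluation
    )
    print(results["results"])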

GPT-Neo

GPT-Neo is an implementation of model- and data-parallel GPT-2- and GPT-3-style models, using Mesh TensorFlow for distributed support. The codebase is designed for TPUs; it should also work on GPUs, though we do not recommend that hardware configuration. Progress: GPT-Neo should be feature complete. We are making bugfixes, but we do not expect any significant changes. As of 2021-03-21, 1.3B and 2.7B parameter GPT-Neo models are available and can be run with the codebase....
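The released checkpoints are also hosted on the Hugging Face Hub, so a quick way to try them without the Mesh TensorFlow stack is through the transformers library. A minimal sketch, assuming a transformers version with GPT-Neo support:

    # Minimal sketch: sample text from the released 1.3B checkpoint
    # using Hugging Face transformers.
    from transformers import pipeline

    generator = pipeline("text-generation", model="EleutherAI/gpt-neo-1.3B")
    out = generator("EleutherAI is", max_length=50, do_sample=True)
    print(out[0]["generated_text"])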

GPT-NeoX

GPT-NeoX is an implementation of 3D-parallel GPT-3-like models on distributed GPUs, built on DeepSpeed and Megatron-LM. Progress: As of 2021-03-31, the codebase is fairly stable; DeepSpeed, 3D parallelism, and ZeRO are all working properly. Next Steps: We are waiting for CoreWeave to finish building the hardware we will ultimately train on; in the meantime, we are optimizing GPT-NeoX to run as efficiently as possible on that hardware.
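For intuition, 3D parallelism splits the GPU pool along three axes: data parallelism (full model replicas), pipeline parallelism (layers split into stages, via DeepSpeed), and tensor/model parallelism (individual layers sharded across devices, via Megatron-LM). The product of the three degrees must equal the total GPU count; a minimal sketch of that bookkeeping (the helper is hypothetical, not part of the GPT-NeoX codebase):

    # Hypothetical helper: derive the data-parallel degree implied by a
    # 3D-parallel layout. tensor * pipeline must divide the GPU count.
    def data_parallel_degree(world_size: int, tensor: int, pipeline: int) -> int:
        if world_size % (tensor * pipeline) != 0:
            raise ValueError("tensor * pipeline must divide world_size")
        return world_size // (tensor * pipeline)

    # e.g. 96 GPUs with 4-way tensor and 4-way pipeline parallelism
    # leave a data-parallel degree of 6 (4 * 4 * 6 = 96).
    print(data_parallel_degree(world_size=96, tensor=4, pipeline=4))  # -> 6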

OpenWebText2

WebText is an internet dataset created by scraping URLs extracted from Reddit submissions with a minimum score of 3, using the score as a proxy for quality. It was collected to train the original GPT-2 and was never released to the public; however, researchers independently reproduced the pipeline and released the resulting dataset, called OpenWebTextCorpus (OWT). OpenWebText2 is an enhanced version of the original OpenWebTextCorpus covering all Reddit submissions from 2005 up to April 2020, with further months becoming available after the corresponding PushShift dump files are released....
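The quality filter at the core of the pipeline is simple: keep only URLs from submissions whose score meets the threshold, then deduplicate before scraping. A minimal sketch of that step (the field names are assumptions about the parsed submission records, not the project's actual code):

    # Hypothetical sketch of the URL-filtering step: keep outbound links
    # from Reddit submissions scoring at least 3, deduplicated.
    MIN_SCORE = 3

    def extract_urls(submissions):
        seen = set()
        for sub in submissions:  # e.g. dicts parsed from a PushShift dump
            if sub.get("score", 0) >= MIN_SCORE:
                url = sub.get("url")
                if url and url not in seen:
                    seen.add(url)
                    yield url

    subs = [
        {"score": 5, "url": "https://example.com/a"},
        {"score": 1, "url": "https://example.com/b"},  # below threshold
        {"score": 7, "url": "https://example.com/a"},  # duplicate
    ]
    print(list(extract_urls(subs)))  # -> ['https://example.com/a']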

The Pile

The Pile is a large, diverse, open-source language modeling dataset made up of many smaller datasets combined. The objective is to gather text from as many modalities as possible so that models trained on The Pile have much broader generalization abilities. The Pile is now live! Download it now, or read the docs.
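Since The Pile is a mixture of component datasets, training code typically draws documents from each component in proportion to a mixture weight. A minimal sketch of that sampling scheme (component names and weights below are illustrative, not The Pile's actual proportions):

    import random

    # Illustrative components and mixture weights (not The Pile's real ones).
    COMPONENTS = {"web": 0.5, "academic": 0.3, "code": 0.2}

    def sample_component(rng: random.Random) -> str:
        # Pick a component dataset in proportion to its mixture weight.
        names = list(COMPONENTS)
        weights = [COMPONENTS[n] for n in names]
        return rng.choices(names, weights=weights, k=1)[0]

    rng = random.Random(0)
    print([sample_component(rng) for _ in range(5)])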