Write With Transformer
Get a modern neural network to auto-complete your thoughts.
This web app, built by the Hugging Face team, is the official demo of the
🤗/transformers
repository's text generation capabilities.
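As a rough illustration of what the demo does under the hood, the snippet below auto-completes a prompt with the transformers text-generation pipeline. It is a minimal sketch assuming a recent transformers release and the public gpt2 checkpoint; the web app's own decoding settings may differ.

```python
# Minimal sketch of prompt auto-completion with the 🤗/transformers pipeline API.
# Assumes a recent `transformers` release and the public "gpt2" checkpoint;
# the demo's actual decoding settings may differ.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completions = generator(
    "In a shocking finding, scientists discovered",
    max_length=50,           # total length (prompt + completion) in tokens
    num_return_sequences=3,  # offer several alternative continuations
    do_sample=True,          # sample instead of greedy decoding, for variety
)

for c in completions:
    print(c["generated_text"])
```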
The almighty king of text generation, GPT-2 comes in four sizes, only three of which have been publicly released. Feared for its fake news generation capabilities, it currently stands as the most syntactically coherent model. A direct successor to the original GPT, it reinforces the already established pre-training/fine-tuning killer duo.
From the paper: Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever.
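The released GPT-2 sizes can be loaded directly with explicit model and tokenizer classes, as in the sketch below. The checkpoint identifiers are the standard Hugging Face hub names (gpt2, gpt2-medium, gpt2-large); this is an illustrative loading example, not the demo's exact code.

```python
# Illustrative sketch: loading one of the publicly released GPT-2 sizes with
# 🤗/transformers. The names are the standard hub identifiers; swap in
# "gpt2-medium" or "gpt2-large" for the larger released checkpoints.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The king of text generation is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=30, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```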
Overcoming the unidirectional limit while maintaining an independent masking algorithm based on permutation, XLNet improves upon the state-of-the-art autoregressive model that is Transformer-XL. Using a bidirectional context while keeping its autoregressive approach, this model outperforms BERT on 20 tasks while keeping an impressive generative coherence.
From the paper: XLNet: Generalized Autoregressive Pretraining for Language Understanding, by Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov and Quoc V. Le.
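XLNet plugs into the same text-generation interface as the other models on this page. The sketch below assumes the standard xlnet-base-cased checkpoint name; it is an illustration, not the demo's exact configuration.

```python
# Sketch: generating text with XLNet through the same pipeline interface.
# Assumes the standard "xlnet-base-cased" checkpoint from the model hub.
from transformers import pipeline

generator = pipeline("text-generation", model="xlnet-base-cased")
print(generator("The history of natural language processing", max_length=60)[0]["generated_text"])
```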
Released by OpenAI, this seminal architecture has shown that large gains on several NLP tasks can be achieved by generative pre-training of a language model on unlabeled text before fine-tuning it on a downstream task.
From the paper: Improving Language Understanding by Generative Pre-Training, by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever.