---
title: README
emoji: 🚢
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
---
Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
Intel optimizes widely adopted and innovative AI software tools, frameworks, and libraries for Intel® architecture. Whether you are computing locally or deploying AI applications on a massive scale, your organization can achieve peak performance with AI software optimized for Intel® Xeon® Scalable platforms.
Intel's engineering collaboration with Hugging Face offers state-of-the-art hardware and software acceleration to train, fine-tune, and run inference with Transformers.
To get started with Intel hardware and software optimizations, download and install the Optimum Intel and Intel® Extension for Transformers libraries. Their documentation explains how to install and use each library.
The Optimum Intel library primarily provides hardware acceleration, while the Intel® Extension for Transformers focuses more on software acceleration. Both should be installed to achieve ideal performance and productivity gains in transfer learning and fine-tuning with Hugging Face.
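As a hedged sketch of what this looks like in practice (assuming the OpenVINO backend of Optimum Intel; package extras and APIs may differ across versions, so defer to each library's documentation), loading a model through Optimum Intel keeps the familiar Transformers workflow:

```python
# Sketch only: assumes `pip install "optimum[openvino]"` for Optimum Intel's
# OpenVINO backend and `pip install intel-extension-for-transformers` for the
# Intel Extension for Transformers.
from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to OpenVINO format on the fly,
# so inference below runs through the Intel-optimized OpenVINO runtime.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Hugging Face pipelines work unchanged with Optimum Intel models."))
```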
Next, find your desired model (and dataset) using the search box at the top left of the Hugging Face website. Add "intel" to your query to narrow the results to models pretrained by Intel.
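The same search can also be done programmatically; here is a minimal sketch using the huggingface_hub client (the `list_models` parameters shown may change between client releases):

```python
# Hedged sketch: list models published under the Intel organization on the Hub.
# Requires `pip install huggingface_hub`.
from huggingface_hub import list_models

# Five most-downloaded Intel models; drop `limit` to iterate over all of them.
for model in list_models(author="Intel", sort="downloads", direction=-1, limit=5):
    print(model.id)
```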
On the model's page (called a "Model Card") you will find a description, usage information, an embedded inference demo, and links to the associated dataset. In the upper right of the page, click "Use in Transformers" for code hints on importing the model into your own workspace with an established Hugging Face pipeline and tokenizer.
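Those hints typically look like the following. This is a hedged sketch using Intel/dynamic_tinybert, one of Intel's pretrained question-answering models on the Hub, as an illustrative example; any model surfaced by the search above can be substituted:

```python
# Sketch of the "Use in Transformers" pattern with an Intel-pretrained model.
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_id = "Intel/dynamic_tinybert"  # question-answering model published by Intel
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id)

qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
print(qa(
    question="Who is collaborating with Hugging Face?",
    context="Intel and Hugging Face are building optimization tools to "
            "accelerate training and inference with Transformers.",
))
```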