Collaboration between AWS and Hugging Face | Turtles AI
Collaboration between AWS and Hugging Face
DukeRem
Amazon Web Services (AWS) and Hugging Face have announced an expanded collaboration to accelerate the training, fine-tuning, and deployment of large language models (LLMs) and vision models for generative AI applications. These applications can perform tasks such as text summarization, question answering, code generation, image creation, and writing essays and articles.
AWS has a history of innovation in generative AI, using it to deliver conversational experiences with Alexa and increasingly applying it to new features such as Create with Alexa. Additionally, AWS has built purpose-built ML accelerators for the training and inference of large language and vision models.
Hugging Face chose to collaborate with AWS due to its flexibility and state-of-the-art tools, such as Amazon SageMaker, AWS Trainium, and AWS Inferentia. This collaboration aims to make it easier for developers to access AWS services and deploy Hugging Face models specifically for generative AI applications, resulting in faster training and scaling.
Developers can use AWS Trainium and AWS Inferentia through managed services such as Amazon SageMaker, or they can self-manage on Amazon EC2. Customers can start using Hugging Face models on AWS through SageMaker JumpStart, the Hugging Face AWS Deep Learning Containers, or tutorials to deploy models to AWS Trainium or AWS Inferentia.
The Hugging Face DLCs come packed with optimized versions of the transformers, datasets, and tokenizers libraries, allowing developers to fine-tune and deploy generative AI applications at scale in hours instead of weeks. SageMaker JumpStart and the Hugging Face DLCs are available in all regions where Amazon SageMaker is available and come at no additional cost.
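As a rough illustration of the deployment path described above, here is a minimal sketch using the SageMaker Python SDK to serve a Hugging Face Hub model from a Hugging Face DLC. The role ARN, model ID, and framework versions are placeholder assumptions, not values from the announcement; check the AWS documentation for the DLC version combinations actually available in your region.

```python
# Sketch: deploying a Hugging Face Hub model on SageMaker via the
# Hugging Face Deep Learning Containers. Assumes the `sagemaker` SDK
# is installed and the caller has AWS credentials configured.
from sagemaker.huggingface import HuggingFaceModel


def deploy_sentiment_model(role_arn: str):
    """Deploy a text-classification model and return a predictor.

    `role_arn` is a hypothetical SageMaker execution role ARN.
    """
    # Environment variables tell the DLC which Hub model to load.
    hub_env = {
        "HF_MODEL_ID": "distilbert-base-uncased-finetuned-sst-2-english",
        "HF_TASK": "text-classification",
    }

    model = HuggingFaceModel(
        env=hub_env,
        role=role_arn,
        # Illustrative version pins; pick a combination supported
        # by the Hugging Face DLCs in your region.
        transformers_version="4.26",
        pytorch_version="1.13",
        py_version="py39",
    )

    # "ml.inf2.xlarge" targets AWS Inferentia2; a GPU or CPU
    # instance type works here as well.
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.inf2.xlarge",
    )
    return predictor


# Usage (requires a live AWS account, so not executed here):
# predictor = deploy_sentiment_model("arn:aws:iam::111122223333:role/SageMakerRole")
# predictor.predict({"inputs": "This collaboration looks promising."})
# predictor.delete_endpoint()  # clean up to stop incurring charges
```

Because `deploy()` provisions a real endpoint that bills by the hour, deleting the endpoint after use (as in the commented cleanup line) is the usual pattern for experiments.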