Customers can now tap into the latest Nvidia H100 Tensor Core GPUs, well suited for training foundation models and large language models. They can specify the cluster size and reservation duration, paying only for the resources they need.
Amazon highlighted the growing demand for GPUs alongside the proliferation of generative AI. Many businesses face a dilemma: either overpay for excess capacity or leave expensive GPUs sitting idle, both of which are inefficient.