TPU Research Cloud
Accelerate your cutting-edge machine learning research with free Cloud TPUs.
Apply now
Learn more about the TRC program
TRC enables researchers to apply for access to a cluster of more than 1,000 Cloud TPU devices. Researchers accepted into the TRC program will have access to Cloud TPUs at no charge and can leverage a variety of frameworks including TensorFlow, PyTorch, Julia and JAX to accelerate the next wave of open research breakthroughs.
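For example, once a researcher has a Cloud TPU VM, a few lines of JAX are enough to confirm the TPU cores are visible and run a computation on them. A minimal sketch, assuming JAX has been installed with TPU support; this is not an official TRC setup script:

```python
# Minimal sketch: confirm JAX can see the TPU cores on a Cloud TPU VM.
# Assumes JAX was installed with TPU support, e.g. via:
#   pip install "jax[tpu]" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html
import jax
import jax.numpy as jnp

devices = jax.devices()  # lists the TPU devices visible to this host
print(f"Found {len(devices)} device(s): {devices}")

# A small matrix multiply, compiled by XLA and executed on the TPU.
x = jnp.ones((2048, 2048))
print(jnp.dot(x, x).sum())
```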
Participants in the TRC program will be expected to share their TRC-supported research with the world through peer-reviewed publications, open source code, blog posts, or other means. They should also be willing to share detailed feedback with Google to help us improve the TRC program and the underlying Cloud TPU platform over time. In addition, participants accept Google's Terms and Conditions, acknowledge that their information will be used in accordance with our Privacy Policy, and agree to conduct their research in accordance with the Google AI principles.
Machine learning researchers around the world have done amazing things with the limited computational resources they currently have available.
We'd like to empower researchers from many different backgrounds to think even bigger and tackle exciting new challenges that would otherwise be inaccessible.
Apply now
Use Cloud TPUs for free, right in your browser
If you'd like to get started with Cloud TPUs right away, you can access them for free in your browser using Google Colab. Colab is a Jupyter notebook environment that requires no setup to use.
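For example, after switching the Colab runtime to TPU (Runtime > Change runtime type), a few lines of TensorFlow, one of the supported frameworks, can confirm the TPU is reachable. A minimal sketch, not one of the official notebook examples:

```python
# Minimal sketch: verify the TPU runtime from inside a Colab notebook.
# Assumes a TPU runtime has already been selected in Colab's settings.
import tensorflow as tf

# Locate and initialize the TPU attached to this Colab session.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

print("TPU cores:", tf.config.list_logical_devices("TPU"))
```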
We're excited to help researchers and students everywhere expand the machine learning frontier by making Cloud TPUs available for free.
Learn more about Cloud TPUs
To get started, try one of these TPU-compatible notebook examples:
Cloud TPUs: Built to train and run ML models
Cloud TPU hardware accelerators are designed from the ground up to speed up the training and execution of machine learning models. The TPU Research Cloud (TRC) provides researchers with access to a pool of thousands of Cloud TPU chips, each of which can deliver up to 45 (v2), 123 (v3), or 275 (v4) teraflops of ML acceleration.
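For a rough sense of scale, the per-chip peak figures above can be multiplied out across a pool. A back-of-the-envelope sketch; the pool size of 1,000 chips is an illustrative assumption, not a quota:

```python
# Back-of-the-envelope aggregate peak compute for a pool of Cloud TPU chips.
# Per-chip peak teraflops are the figures quoted above; the pool size is a
# hypothetical round number for illustration.
PEAK_TFLOPS_PER_CHIP = {"v2": 45, "v3": 123, "v4": 275}
POOL_SIZE = 1_000

for version, tflops in PEAK_TFLOPS_PER_CHIP.items():
    petaflops = POOL_SIZE * tflops / 1_000
    print(f"{POOL_SIZE} x TPU {version} chips ~ {petaflops:,.0f} petaflops peak")
```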
Learn more about Cloud TPUs