MediaPipe is the simplest way for researchers and developers to build world-class ML solutions and applications for mobile, desktop/cloud, web and IoT devices.
* **End-to-end acceleration**: built-in fast ML inference and processing, accelerated even on common hardware
* **Build once, deploy anywhere**: a unified solution works across Android, iOS, desktop/cloud, web and IoT
* **Ready-to-use solutions**: cutting-edge ML solutions demonstrating the full power of the framework
* **Free and open source**: framework and solutions both under Apache 2.0, fully extensible and customizable
Ready-to-use solutions include: Face Detection, Face Mesh, Iris, Hands, Pose, Hair Segmentation, Object Detection, Box Tracking, Instant Motion Tracking, Objectron and KNIFT.
See also MediaPipe Models and Model Cards for ML models released in MediaPipe.
MediaPipe on the Web is an effort to run the same ML solutions built for mobile and desktop in web browsers as well. The official API is still under construction, but the core technology has been proven effective. See MediaPipe on the Web on the Google Developers Blog for details.
You can use the following links to load a demo in the MediaPipe Visualizer, then click the "Runner" icon in the top bar. The demos use your webcam video as input, which is processed locally in real time and never leaves your device.
MediaPipe is currently in alpha at v0.7. We may still be making breaking API changes and expect to reach stable APIs by v1.0.
We welcome contributions. Please follow these guidelines.
We use GitHub issues for tracking requests and bugs. Please post questions to the MediaPipe Stack Overflow with a `mediapipe` tag.