Build handsfree User Experiences and add face, hand, and pose tracking to your projects in a snap ✨👌

    // Enable Mediapipe's "Hands" model
    const handsfree = new Handsfree({hands: true})

    // Enable plugins tagged with "browser"
    handsfree.enablePlugins('browser')

    // Start tracking
    handsfree.start()
Demo: Scroll pages handsfree

  • 👌 Pinch your thumb and index to grab the page
  • ↕ While pinched, move hand up and down to scroll page
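Under the hood, a pinch like this boils down to thresholding the distance between the thumb tip and index tip landmarks (indices 4 and 8 in MediaPipe Hands). A minimal sketch of that idea, assuming normalized landmark coordinates; the threshold and scroll-speed values here are illustrative, not the ones Handsfree.js uses internally:

```javascript
// Euclidean distance between two normalized landmarks ({x, y})
const distance = (a, b) => Math.hypot(a.x - b.x, a.y - b.y)

// A hand counts as "pinched" when the thumb tip (landmark 4) and index
// tip (landmark 8) are closer than a threshold. 0.08 is illustrative.
const isPinched = (landmarks, threshold = 0.08) =>
  distance(landmarks[4], landmarks[8]) < threshold

// While pinched, vertical hand movement maps to a scroll delta in pixels
const scrollDelta = (prevY, currY, speed = 1000) => (prevY - currY) * speed
```

A plugin would call `isPinched` on each frame's landmarks and feed `scrollDelta` into `window.scrollBy` while the pinch is held.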

# Installing

  • CDN
  • NPM
  • Note: Some models are over 10 MB and may take a few seconds to load.

      <!-- Include Handsfree.js -->
      <link rel="stylesheet" href="https://unpkg.com/handsfree@8.5.1/build/lib/assets/handsfree.css" />
      <script src="https://unpkg.com/handsfree@8.5.1/build/lib/handsfree.js"></script>

      <!-- Instantiate and start it -->
      <script>
        const handsfree = new Handsfree({hands: true})
        handsfree.start()
      </script>

# Models

Each of the following models can be combined and reconfigured in real time.

Model: MediaPipe Hands (2D)

📚 MediaPipe Hands documentation

  • 21 2D hand landmarks per hand
  • Track up to 4 hands at once
  • Pinching states, hand pointers, and gestures

Model: TensorFlow Handpose (3D)

📚 TensorFlow Handpose documentation

  • 21 3D hand landmarks
  • Can only track 1 hand at a time
  • 📅 Extra helpers and plugins coming soon

Model: MediaPipe FaceMesh

📚 MediaPipe FaceMesh documentation

  • 468 2D face landmarks
  • Track up to 4 faces at once
  • 📅 Extra helpers and plugins coming soon

Model: MediaPipe Pose

📚 MediaPipe Pose documentation

  • Full body mode with 33 2D pose landmarks
  • Upper body mode with 25 2D upper pose landmarks
  • 📅 Extra helpers and plugins coming soon
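
Pose landmarks make simple posture checks straightforward. As an illustration (not a Handsfree.js helper), MediaPipe Pose indexes the left shoulder at 11 and the left wrist at 15, and normalized image coordinates grow downward, so a raised wrist has a smaller `y` than the shoulder:

```javascript
// MediaPipe Pose landmark indices (assumed here):
// 11 = left shoulder, 15 = left wrist.
const LEFT_SHOULDER = 11
const LEFT_WRIST = 15

// In normalized image coordinates y increases downward, so "raised"
// means the wrist's y is smaller than the shoulder's.
const isLeftHandRaised = (landmarks) =>
  landmarks[LEFT_WRIST].y < landmarks[LEFT_SHOULDER].y
```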

Model: Jeeliz Weboji

📚 Jeeliz Weboji documentation

  • 6DOF head pose estimations
  • 11 face morphs and 16 helper states
  • Comes with "Face Pointer" based plugins
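
A "Face Pointer" essentially maps head yaw and pitch onto screen coordinates. A rough sketch of that mapping, with a made-up sensitivity constant; this is an illustration of the idea, not the plugin's actual code:

```javascript
// Map head rotation (pitch and yaw, in radians) to a screen position.
// `width`/`height` are the viewport size; `sensitivity` scales pointer
// travel per radian of rotation (800 is an illustrative value).
const facePointer = (pitch, yaw, width, height, sensitivity = 800) => ({
  x: width / 2 + yaw * sensitivity,
  y: height / 2 + pitch * sensitivity
})
```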

# Quickstart Workflow

The following workflow demonstrates how to use all the features of Handsfree.js. Check out the Guides and References to dive deeper, and feel free to post on the Google Groups or Discord if you get stuck!

    // Let's enable face tracking with the default Face Pointer
    const handsfree = new Handsfree({weboji: true})

    // Now let's start things up
    handsfree.start()

    // Let's create a plugin called "logger"
    // - Plugins run on every frame and are how you "plug in" to the main loop
    // - "this" context is the plugin itself. In this case, handsfree.plugin.logger
    handsfree.use('logger', data => {
      console.log(data.weboji.morphs, data.weboji.rotation, data.weboji.pointer, data, this)
    })

    // Let's switch to hand tracking now. To demonstrate that you can do this live,
    // let's create a plugin that switches to hand tracking when both eyebrows go up
    handsfree.use('handTrackingSwitcher', ({weboji}) => {
      if (weboji.state.browsUp) {
        // Disable this plugin
        // Same as handsfree.plugin.handTrackingSwitcher.disable()
        this.disable()

        // Turn off face tracking and enable hand tracking
        handsfree.update({
          weboji: false,
          hands: true
        })
      }
    })

    // You can enable and disable any combination of models and plugins
    handsfree.update({
      // Disable weboji which is currently running
      weboji: false,
      // Start the pose model
      pose: true,

      // This is also how you configure (or pre-configure) a bunch of plugins at once
      plugin: {
        fingerPointer: {enabled: false},
        faceScroll: {
          vertScroll: {
            scrollSpeed: 0.01
          }
        }
      }
    })

    // Disable all plugins
    handsfree.disablePlugins()
    // Enable only the plugins for making music (not actually implemented yet)
    handsfree.enablePlugins('music')

    // Overwrite our logger to display the original model APIs
    handsfree.plugin.logger.onFrame = (data) => {
      console.log(handsfree.model.weboji?.api, handsfree.model.hands?.api, handsfree.model.pose?.api)
    }
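
Because plugins fire on every frame, raw landmark data tends to be jittery. A common fix is exponential smoothing; the helper below is a generic sketch (the smoothing Handsfree.js applies internally may differ):

```javascript
// Exponential moving average: blends each new sample with the previous
// smoothed value. alpha in (0, 1]; higher = more responsive, lower = smoother.
const makeSmoother = (alpha = 0.3) => {
  let prev = null
  return (value) => {
    prev = prev === null ? value : alpha * value + (1 - alpha) * prev
    return prev
  }
}
```

Inside a plugin you would create one smoother per coordinate (e.g. pointer x and y) and feed each frame's raw value through it before using the result.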