# Prop: handsfree.config
Contains a sanitized copy of the object you instantiated Handsfree with:
```js
const config = {}
const handsfree = new Handsfree(config)

// Since you passed an empty object, this will contain all the defaults
console.log(handsfree.config)
```
The sanitization process simply adds default values for any options you did not provide. Passing an empty object results in `handsfree.config` containing all the defaults listed below. The recommended way to update this config is with `handsfree.update()`.
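Conceptually, the sanitization step behaves like a deep defaults merge: your options win, and anything you omit falls back to the default. Here is a minimal sketch of that idea (illustrative only — not the actual merge Handsfree.js uses internally, and the `defaults` object is trimmed down for the example):

```js
// Illustrative deep-merge: user options override defaults, missing keys fall back
const defaults = { isClient: false, weboji: { enabled: false, throttle: 0 } }

function mergeDefaults (defaults, options) {
  const out = { ...defaults }
  for (const key of Object.keys(options)) {
    const val = options[key]
    out[key] = (val && typeof val === 'object' && !Array.isArray(val))
      ? mergeDefaults(defaults[key] || {}, val)
      : val
  }
  return out
}

const config = mergeDefaults(defaults, { weboji: { enabled: true } })
console.log(config)
// { isClient: false, weboji: { enabled: true, throttle: 0 } }
```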
# Setup
# .assetsPath
Default: `https://unpkg.com/handsfree@8.5.1/build/lib/assets`
To keep page loads snappy, the models are loaded only when needed, and because Handsfree.js is designed to power web apps they are hosted on a CDN. However, you can download a zip file containing the models and copy the `/build/lib/assets/` folder into your project's public folder to host them yourself.

With your models extracted, set the `assetsPath` to your folder:
```js
const handsfree = new Handsfree({
  weboji: true,
  assetsPath: '/my/public/assets/'
})
handsfree.start()
```
If there's an error, a `modelError` event will be triggered along with a console message, which you can use to zero in on the correct folder.
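For example, you can listen for that event to surface a helpful message. This sketch assumes the `handsfree.on` event helper and that the equivalent document-level event is prefixed `handsfree-`; the payload shape is not specified here:

```js
// Assumes the handsfree.on event helper; the equivalent document-level
// event name ("handsfree-modelError") is an assumption
handsfree.on('modelError', (event) => {
  console.error('Could not load model assets; check your assetsPath:', event)
})
```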
# .isClient
Default: `false`

Setting this to `true` will cause Handsfree.js to only load the plugins, and disables the loop. This is useful when you want to run computer vision on another device or context, but run the plugins on the current device or context.
A common use case is to run Handsfree.js in the browser and stream the data to the desktop via websockets, for example, to control the desktop mouse pointer. Another use case is to run Handsfree.js plugins on a low powered device while running the models externally on a device with a GPU.
You'll need to manually call `handsfree.runPlugins(data)` on the local device/context on each frame, as there will be no loop.
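A rough sketch of that client pattern, where `socket` is a hypothetical WebSocket delivering inference data from the remote device (the transport and data shape are assumptions, not part of Handsfree.js):

```js
// Load only the plugins on this device; no models, no loop
const handsfree = new Handsfree({ isClient: true })
handsfree.start()

// `socket` is a hypothetical WebSocket receiving frames of model data
// from another device; run the plugins manually for each frame
socket.addEventListener('message', (message) => {
  handsfree.runPlugins(JSON.parse(message.data))
})
```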
# .setup.canvas[modelName]
Default:
```js
{
  // The canvas element to hold the skeletons and keypoints
  // - Will automatically get created and injected into .setup.wrap if null
  $el: null,

  // These are currently automatically set
  // width: 1280,
  // height: 720
}
```
# .setup.video
Default:
```js
{
  // The video element to hold the webcam stream
  // - Will automatically get created and injected into .setup.wrap if null
  $el: null,

  // These are currently automatically set by the model (see the model config)
  // width: 1280,
  // height: 720
}
```
# .setup.wrap
Default:
```js
{
  // The element that holds the video and canvas overlay
  // - Will automatically get created and injected into $parent if null
  $el: null,

  // The element to inject the setup wrapper into
  $parent: document.body
}
```
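For example, to supply your own elements instead of having them auto-created (the selectors below are placeholders for your own markup):

```js
const handsfree = new Handsfree({
  weboji: true,
  setup: {
    // Use an existing <video> element instead of creating one
    video: { $el: document.querySelector('#my-video') },

    // Inject the wrapper into a specific container rather than document.body
    wrap: { $parent: document.querySelector('#handsfree-container') }
  }
})
```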
# Models
# .hands
(2D) See the Hands Model page.
# .handpose
(3D) See the Handpose Model page.
# .facemesh
See the Facemesh Model page.
# .pose
See the Pose Model page.
# .weboji
See the Weboji Model page.
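Each model can be enabled with a Boolean to use its defaults, or with an object to override specific options from the full list at the bottom of this page; for example:

```js
const handsfree = new Handsfree({
  // Enable weboji with all defaults
  weboji: true,

  // Enable hands with specific options (note the explicit enabled flag)
  hands: {
    enabled: true,
    maxNumHands: 1,
    minDetectionConfidence: 0.75
  }
})
handsfree.start()
```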
# Plugins
See the individual plugin pages for possible configs. As with models, you can pass a Boolean to enable/disable a plugin, or an object to configure specific properties (don't forget to set `enabled: true` if you'd like it enabled):
```js
handsfree = new Handsfree({
  hands: true,
  weboji: true,
  plugin: {
    // Enable this plugin with defaults
    facePointer: true,

    // Enable this plugin with specific configs
    pinchScroll: {
      enabled: true,
      speed: 1
    }
  }
})
```
# The Full List
The following is a copy of the actual default object used by Handsfree.js. This page will be better organized in the future; for now, please refer to the defaults below:
```js
/**
 * The following are all the defaults
 *
 * @see https://handsfree.js.org/ref/prop/config
 */
export default {
  // Whether to automatically start or not
  // This works both during instantiation or with .update()
  autostart: false,

  // Use CDN by default
  assetsPath: 'https://unpkg.com/handsfree@8.5.1/build/lib/assets',

  // This will load everything but the models. This is useful when you want to run inference
  // on another device or context but run the plugins on the current device
  isClient: false,

  // Gesture config
  gesture: {},

  // Setup config. Ignore this to have everything done for you automatically
  setup: {
    // The canvas elements to use for rendering debug info like skeletons and keypoints
    canvas: {
      weboji: {
        // The canvas element to hold the skeletons and keypoints for the weboji model
        $el: null,
        width: 1280,
        height: 720
      },
      hands: {
        // The canvas element to hold the skeletons and keypoints for the hands model
        $el: null,
        width: 1280,
        height: 720
      },
      handpose: {
        // The canvas element to hold the skeletons and keypoints for the handpose model
        $el: null,
        width: 1280,
        height: 720
      },
      pose: {
        // The canvas element to hold the skeletons and keypoints for the pose model
        $el: null,
        width: 1280,
        height: 720
      },
      facemesh: {
        // The canvas element to hold the skeletons and keypoints for the facemesh model
        $el: null,
        width: 1280,
        height: 720
      }
    },

    // The video source to use.
    // - If not present, one will be created and use the webcam
    // - If present without a source then the webcam will be used
    // - If present with a source then that source will be used instead of the webcam
    video: {
      // The video element to hold the webcam stream
      $el: null,
      width: 1280,
      height: 720
    },

    // The wrapping element
    wrap: {
      // The element to put the video and canvas inside of
      $el: null,

      // The parent element
      $parent: null
    }
  },

  // Weboji model
  weboji: {
    enabled: false,
    throttle: 0,

    videoSettings: {
      // The video, canvas, or image element
      // Omit this to auto create a <VIDEO> with the webcam
      videoElement: null,

      // ID of the device to use
      // Omit this to use the system default
      deviceId: null,

      // Which camera to use on the device
      // Possible values: 'user' (front), 'environment' (back)
      facingMode: 'user',

      // Video dimensions
      idealWidth: 320,
      idealHeight: 240,
      minWidth: 240,
      maxWidth: 1280,
      minHeight: 240,
      maxHeight: 1280
    },

    // Thresholds needed before these are considered "activated"
    // - Ranges from 0 (not active) to 1 (fully active)
    morphs: {
      threshold: {
        smileRight: 0.7,
        smileLeft: 0.7,
        browLeftDown: 0.8,
        browRightDown: 0.8,
        browLeftUp: 0.8,
        browRightUp: 0.8,
        eyeLeftClosed: 0.4,
        eyeRightClosed: 0.4,
        mouthOpen: 0.3,
        mouthRound: 0.8,
        upperLip: 0.5
      }
    }
  },

  // Hands model
  hands: {
    enabled: false,

    // The maximum number of hands to detect [0 - 4]
    maxNumHands: 2,

    // Minimum confidence [0 - 1] for a hand to be considered detected
    minDetectionConfidence: 0.5,

    // Minimum confidence [0 - 1] for the landmark tracker to be considered detected
    // Higher values are more robust at the expense of higher latency
    minTrackingConfidence: 0.5
  },

  // Facemesh model
  facemesh: {
    enabled: false,

    // The maximum number of faces to detect [1 - 4]
    maxNumFaces: 1,

    // Minimum confidence [0 - 1] for a face to be considered detected
    minDetectionConfidence: 0.5,

    // Minimum confidence [0 - 1] for the landmark tracker to be considered detected
    // Higher values are more robust at the expense of higher latency
    minTrackingConfidence: 0.5
  },

  // Pose model
  pose: {
    enabled: false,

    // Outputs only the top 25 pose landmarks if true,
    // otherwise shows all 33 full body pose landmarks
    // - Note: Setting this to true may result in better accuracy
    upperBodyOnly: false,

    // Helps reduce jitter over multiple frames if true
    smoothLandmarks: true,

    // Minimum confidence [0 - 1] for a person detection to be considered detected
    minDetectionConfidence: 0.5,

    // Minimum confidence [0 - 1] for the pose tracker to be considered detected
    // Higher values are more robust at the expense of higher latency
    minTrackingConfidence: 0.5
  },

  // Handpose model
  handpose: {
    enabled: false,

    // The backend to use: 'webgl' or 'wasm'
    // 🚨 Currently only webgl is supported
    backend: 'webgl',

    // How many frames to go without running the bounding box detector.
    // Set to a lower value if you want a safety net in case the mesh detector produces consistently flawed predictions.
    maxContinuousChecks: Infinity,

    // Threshold for discarding a prediction
    detectionConfidence: 0.8,

    // A float representing the threshold for deciding whether boxes overlap too much in non-maximum suppression. Must be between [0, 1]
    iouThreshold: 0.3,

    // A threshold for deciding when to remove boxes based on score in non-maximum suppression.
    scoreThreshold: 0.75
  },

  plugin: {}
}
```