# Web Quick Start
Embed Cephable's on-device personal assistant into your web application in minutes — private voice and camera controls with no data leaving the user's device.
## Prerequisites

- An OAuth Client ID and Device Type ID from the Cephable Portal
  - Start a free 30-day trial at services.cephable.com/trial/developers to receive your OAuth Client ID, Client Secret, and Device Type ID
- Node.js 18+
## Install the SDK

```bash
npm install @cephable/cephable-web
```
## Quickest setup — CephableService

CephableService wires up authentication, device management, on-device voice, and on-device camera controls in one call. All audio and video processing runs in the browser — nothing is sent to an external AI service:

```ts
import { CephableService } from '@cephable/cephable-web';

const cephableService = new CephableService({
  authenticationConfiguration: {
    clientId: 'YOUR_CLIENT_ID',
    clientSecret: 'YOUR_CLIENT_SECRET',
    redirectUri: location.href,
    autoRefresh: true,
  },
  deviceName: 'My Web App',
  deviceTypeId: 'YOUR_DEVICE_TYPE_ID',
  locale: 'en-US',
  includeDefaultControls: true,
  customControls: [],
  enableRemoteControls: false,
});
```
## Step-by-step guide

### 1. Authenticate the user

```ts
import { AuthenticationService } from '@cephable/cephable-web';

const authService = new AuthenticationService({
  clientId: 'YOUR_CLIENT_ID',
  clientSecret: 'YOUR_CLIENT_SECRET',
  redirectUri: location.href,
  autoRefresh: true,
});

// Redirects to Cephable login; call on a button click
authService.startUserAuth(false);
```
### 2. Enable voice controls

```ts
import { VoiceService } from '@cephable/cephable-web';

const voiceService = new VoiceService({
  locale: 'en-US',
  modelPath: '/models/speech',
  audioWorkletPath: './RecognizerAudioProcessor.js',
  onPartialResult: (result) => console.log('Partial:', result),
  onFinalResult: (result) => console.log('Final:', result),
});

voiceService.startVoiceControls([]);
```
### 3. Enable camera controls

Add the required HTML elements to your page:

```html
<video id="video" style="display: none"></video>
<canvas id="canvas"></canvas>
```

Then initialize the service:

```ts
import { CameraService } from '@cephable/cephable-web';

const cameraService = new CameraService({
  modelDirectoryPath: '/models/blazeface',
  isDrawingEnabled: true,
  videoElementId: 'video',
  canvasElementId: 'canvas',
  faceLinesColor: 'red',
  baselineFaceLinesColor: 'blue',
  onGesturesRecognized: (gestures) => {
    console.log('Gestures:', gestures);
  },
  onFaceProcessed: (face) => {
    // Inspect raw face data if needed
  },
});

cameraService.startCameraControls();
```
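`onGesturesRecognized` receives a list of recognized gesture names. A common pattern is a lookup table from gesture name to app action; the gesture identifiers below (`faceLeft`, `faceRight`) are assumed placeholders, so check the names your callback actually receives:

```typescript
// Map gesture names to app-level actions. The keys here are assumed
// gesture identifiers, not confirmed Cephable SDK values.
const gestureActions: Record<string, string> = {
  faceLeft: 'previous-page',
  faceRight: 'next-page',
};

// Translate a batch of recognized gestures into actions,
// silently skipping gestures with no mapping.
function handleGestures(gestures: string[]): string[] {
  return gestures
    .map((g) => gestureActions[g])
    .filter((a): a is string => a !== undefined);
}
```

You could call `handleGestures(gestures)` from the `onGesturesRecognized` callback and dispatch each returned action to your router or UI.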
### 4. Interact with the DOM by voice

```ts
import { DomInteractionService } from '@cephable/cephable-web';

const domService = new DomInteractionService({
  excludedSelectors: ['a', '.navigation'],
  excludeHidden: true,
  stricterAutoThreshold: 0.2,
});

// Call these from your voice/gesture command handlers
domService.scrollToElementByDisplayValue('Submit');
domService.focusElementByDisplayValue('Search');
domService.clickElementByDisplayValue('Sign in');
```
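To drive these methods from speech, a recognized phrase must be split into a verb and a display value. A minimal parser sketch (the verb set is an assumption; adapt it to the commands you register):

```typescript
// Supported verbs, longest first so 'scroll to' matches before
// any shorter prefix could.
const verbs = ['scroll to', 'focus', 'click'] as const;

// Split a spoken phrase like 'click Sign in' into an action and the
// display value to hand to DomInteractionService.
function parseDomCommand(
  phrase: string,
): { action: string; target: string } | null {
  const lower = phrase.toLowerCase();
  for (const verb of verbs) {
    if (lower.startsWith(verb + ' ')) {
      return { action: verb, target: phrase.slice(verb.length + 1) };
    }
  }
  return null; // not a DOM command
}
```

A result of `{ action: 'click', target: 'Sign in' }` would then map to `domService.clickElementByDisplayValue('Sign in')`.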
### 5. Load and apply a device profile

```ts
import { DeviceProfileService } from '@cephable/cephable-web';

const profileService = new DeviceProfileService(authService);
await profileService.loadProfiles();

profileService.currentProfile = {
  name: 'default',
  configuration: {
    macros: [],
    keybindings: [],
    audioEvents: [],
    dictationCommands: ['type'],
  },
};
```
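If you build profile objects dynamically, it can help to normalize the configuration so every array exists before assignment. A sketch mirroring the shape used above (this helper and its types are illustrative, not part of the SDK):

```typescript
// Mirrors the configuration object assigned to currentProfile above.
interface ProfileConfiguration {
  macros: unknown[];
  keybindings: unknown[];
  audioEvents: unknown[];
  dictationCommands: string[];
}

// Fill in any missing arrays so downstream code can iterate safely.
function withDefaults(
  partial: Partial<ProfileConfiguration>,
): ProfileConfiguration {
  return {
    macros: partial.macros ?? [],
    keybindings: partial.keybindings ?? [],
    audioEvents: partial.audioEvents ?? [],
    dictationCommands: partial.dictationCommands ?? [],
  };
}
```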
## Adding custom voice commands

Define controls whose phrases include `@entity` placeholders, list the recognized values for each entity under `customEntities`, and handle matches in `onCustomControlAction`:

```ts
new CephableService({
  // ...auth + device config...
  enableIntelligentCommands: true,
  includeBuiltinEntities: true,
  customControls: [
    {
      id: 'navigate',
      defaultCommands: ['navigate to @page', 'go to @page'],
      description: 'Navigate to a specific page',
    },
  ],
  customEntities: {
    page: {
      options: {
        schedule: ['schedule', 'calendar'],
        map: ['map'],
        speakers: ['speakers', 'presenters'],
      },
    },
  },
  onCustomControlAction: (control, command, additionalInput, intent, entities) => {
    if (control.id === 'navigate') {
      router.push(`/${entities.page}`);
    }
  },
});
```
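Conceptually, the entity matching above behaves like a synonym lookup: each spoken option resolves to its entity key, so saying "presenters" fills `@page` with `speakers`. A standalone sketch of that idea (the SDK performs this matching internally; this code is illustrative only):

```typescript
// Same synonym table as the customEntities.page options above.
const pageEntity: Record<string, string[]> = {
  schedule: ['schedule', 'calendar'],
  map: ['map'],
  speakers: ['speakers', 'presenters'],
};

// Resolve a spoken word to its entity key, or undefined if no
// synonym list contains it.
function resolvePage(spoken: string): string | undefined {
  const word = spoken.toLowerCase();
  for (const [key, synonyms] of Object.entries(pageEntity)) {
    if (synonyms.includes(word)) return key;
  }
  return undefined;
}
```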
## Next steps

- Web SDK Overview — Full service and configuration reference
- API Reference
- Swagger UI — Explore the Device API