BLACK BOX Project Hub

Offline Voice Assistant for Seniors

A high-performance, fully offline, and resilient AI assistant running on the NVIDIA Jetson Orin Nano.

🎯 Core Mission

Deliver a voice-activated assistant for reminders, media, and secure memory with zero external dependencies after deployment.

⚡️ Performance Critical

Strictly optimized for a ≤13-second end-to-end response latency, from voice input to audio output.

🛡️ Senior-Friendly & Resilient

Designed for reliability with a simple, high-contrast interface and autonomous operation with graceful degradation.

Project Status & Vitals

  • Target Hardware: NVIDIA Jetson Orin Nano (8GB)
  • Version: 1.0.0 (Production Ready)
  • End-to-End Latency: ~7.9s (within the ≤13s budget)
  • Confidence: 🟢 HIGH

Hardware Bill of Materials

This section details the specific hardware components required for building the BLACK BOX prototype. All components have been selected and tested to meet the project's performance and stability requirements.

Core System

  • Compute: NVIDIA Jetson Orin Nano Dev Kit (8GB)
  • Storage: SK hynix Gold P31 NVMe SSD (500GB)

I/O & Connectivity

  • Display: 7" Capacitive Touchscreen (1024x600)
  • USB Hub: Acer 4-port USB 3.0
  • Bluetooth: ASUS USB-BT500 BT 5.0 Adapter
  • WiFi: BrosTrend AC1200 USB Adapter
  • Input: Standard USB Keyboard & Mouse

Audio I/O

  • Microphone (Opt 1): MilISO USB Gooseneck Mic
  • Microphone (Opt 2): JOUNIVO USB Gooseneck Mic
  • Speaker: Lielongren USB Mini Soundbar

Cooling & Miscellaneous

  • Active Cooling: Wathai 40x10mm USB Fans (x2, with speed control)
  • Onboard Cooling: Jetson Orin Nano fan with optional dust filter

Interactive Setup Guide (v2.1 - Hardened)

Follow these 13 phases sequentially to configure the BLACK BOX system from a fresh Jetson Orin Nano installation. The guide covers system health checks, NVMe storage setup, Docker configuration, TensorRT-LLM engine compilation, and end-to-end testing.
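As an illustration of the kind of pre-flight health check the early phases perform, the sketch below verifies that the NVMe device node exists and that the root filesystem has enough free space. The device path and the 20 GiB threshold are assumptions, not values from the guide; adjust them to your installation.

```python
# Pre-flight health check sketch. The NVMe device path and free-space
# threshold are assumptions; adapt them to your actual setup.
import os
import shutil


def nvme_present(dev="/dev/nvme0n1"):
    """True if the NVMe block device node exists."""
    return os.path.exists(dev)


def free_gib(path="/"):
    """Free space on the filesystem containing `path`, in GiB."""
    return shutil.disk_usage(path).free / 2**30


def health_report(dev="/dev/nvme0n1", root="/", min_free_gib=20):
    """Summarize the checks as a dict suitable for logging."""
    return {
        "nvme_detected": nvme_present(dev),
        "free_gib": round(free_gib(root), 1),
        "enough_space": free_gib(root) >= min_free_gib,
    }
```

On the device, a failing `nvme_detected` at this stage usually means the SSD was not seated or flashed correctly, which is cheaper to catch before the Docker and TensorRT-LLM phases.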

Software Stack & Scripts

The BLACK BOX system relies on a carefully optimized software stack to achieve its performance goals. Below is an overview of the core components and the key scripts used for configuration and maintenance.

Core Inference Pipeline

ASR (Input): Whisper.cpp ➡️ LLM (Processing): TensorRT-LLM ➡️ TTS (Output): Piper TTS
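A minimal sketch of how the three stages chain together, with per-stage timing, is shown below. The stage callables are injected so the driver stays testable; the whisper.cpp invocation is an assumed example (binary path, model file, and flags are placeholders), not the project's actual script.

```python
# Sketch of the ASR -> LLM -> TTS chain with per-stage timing.
# Binary paths and flags in whisper_asr are assumptions, not project scripts.
import subprocess
import time


def run_pipeline(wav_in, wav_out, asr, llm, tts):
    """Chain three stage callables, timing each; returns (reply_text, timings)."""
    timings = {}
    t = time.perf_counter()
    text = asr(wav_in)                 # speech -> text
    timings["asr"] = time.perf_counter() - t
    t = time.perf_counter()
    reply = llm(text)                  # text -> response text
    timings["llm"] = time.perf_counter() - t
    t = time.perf_counter()
    tts(reply, wav_out)                # response text -> audio file
    timings["tts"] = time.perf_counter() - t
    return reply, timings


def whisper_asr(wav):
    """Assumed whisper.cpp CLI call; paths/model are placeholders."""
    return subprocess.check_output(
        ["./main", "-m", "ggml-base.en.bin", "-f", wav, "--no-timestamps"],
        text=True,
    ).strip()
```

Keeping the stages as plain callables also makes it easy to swap in a stub for any stage when measuring where the latency budget is being spent.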

Key Management Scripts

Performance Analysis

The system is engineered to meet a strict end-to-end latency budget of 13 seconds, allocated across the major components of the inference pipeline. Typical real-world performance is roughly 7.9 seconds end to end, leaving comfortable headroom.
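The accounting can be sketched as below. Only the 13 s budget and the ~7.9 s typical total come from the project figures; the per-stage split is an assumed example for illustration.

```python
# Illustrative latency accounting. The 13 s budget and ~7.9 s total are from
# the project docs; the per-stage split below is an assumed example.
BUDGET_S = 13.0

typical_s = {
    "wake+capture": 0.9,  # assumed
    "asr": 1.5,           # assumed
    "llm": 4.0,           # assumed
    "tts": 1.5,           # assumed
}


def headroom(stages, budget=BUDGET_S):
    """Return (total latency, remaining headroom) against the budget."""
    total = sum(stages.values())
    return total, budget - total
```

This kind of breakdown makes it obvious which stage to attack first if a change (e.g. a larger LLM) pushes the total toward the budget.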

Troubleshooting Guide

This section provides solutions and diagnostic commands for common issues that may arise during setup or operation. Always check logs first with `docker-compose logs` for specific error messages.
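A first-pass triage over captured log output can be automated along these lines. The failure patterns listed are generic assumptions, not a list taken from the project's troubleshooting entries.

```python
# First-pass triage over `docker-compose logs` output.
# The failure patterns are generic assumptions, not the project's own list.
import re

PATTERNS = [
    r"CUDA out of memory",
    r"device not found",
    r"\b(ERROR|FATAL)\b",
]


def triage(log_text, patterns=PATTERNS):
    """Return log lines matching any known failure pattern."""
    compiled = [re.compile(p, re.IGNORECASE) for p in patterns]
    return [
        line
        for line in log_text.splitlines()
        if any(rx.search(line) for rx in compiled)
    ]
```

On the device this would typically be fed with something like `subprocess.check_output(["docker-compose", "logs", "--tail", "200"], text=True)` before digging into a specific service.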