Personal assistant

edu_assistant

A local voice assistant with contextual memory and bespoke integrations for routines, projects, and personal automations.

Differentiator
Privacy-first, local control, and low-cost APIs for everyday automation.
Status
Active roadmap with new flows, integrations, and offline mode in progress.
Overview

A home copilot connected to your local ecosystem

The project was created to manage routines, calendars, and projects without exposing sensitive data to external services. Everything runs on the user's machine.

  • Natural voice commands with fast, accurate transcription.
  • Spoken responses ready for continuous, accessible interaction.
  • Persistent memory of preferences, recurring tasks, and important files.
  • Extensible codebase for custom automations via Python scripts.
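Custom automations can be added as plain Python callables. A minimal sketch of what an action registry in modules/actions.py could look like (the registry shape and function names here are assumptions for illustration, not the project's actual API):

```python
# Hypothetical action registry; the real modules/actions.py may differ.
ACTIONS = {}

def register(name):
    """Decorator that maps a spoken command name to a handler."""
    def wrap(fn):
        ACTIONS[name] = fn
        return fn
    return wrap

@register("open project")
def open_project(project: str) -> str:
    # A real handler could launch an editor or read data/projects/.
    return f"Opening project {project}"

def dispatch(name, *args):
    """Look up a registered action and run it, with a safe fallback."""
    handler = ACTIONS.get(name)
    if handler is None:
        return f"Unknown action: {name}"
    return handler(*args)
```

With this shape, adding an automation is just writing one decorated function; the dispatcher never needs to change.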
Technical stack

Lean infrastructure built to evolve

Decoupled components allow future swaps (local LLMs, new TTS, external agendas) without rewriting the project.
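One way to keep components swappable is to code against a small interface rather than a concrete engine. A hedged sketch of that idea (the class and method names are hypothetical, not taken from the codebase):

```python
from typing import Protocol

class TTSBackend(Protocol):
    """Minimal contract any speech backend must satisfy."""
    def speak(self, text: str) -> bytes: ...

class FakeTTS:
    """Stand-in backend; a real one could wrap Edge TTS or a local model."""
    def speak(self, text: str) -> bytes:
        return text.encode("utf-8")  # pretend these bytes are audio

def respond(backend: TTSBackend, text: str) -> bytes:
    # The rest of the assistant only sees TTSBackend, so swapping
    # engines (Edge TTS, local TTS, ...) never touches this code.
    return backend.speak(text)
```

Swapping an engine then means writing one adapter class and changing nothing else.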

Python core

Modules organised by responsibility, keeping each concern isolated and leaving room for future test coverage.

Voice input and output

Transcription with Whisper API and synthesis via Edge TTS, keeping operational costs low.

Smart context

GPT-3.5-turbo integration with vector memory plans via FAISS.

Architecture

Componentised by responsibility

Each folder holds an isolated layer (input, output, memory, data), simplifying maintenance and extensions.

edu_assistant/
├── main.py              # Voice/text interface
├── config.json          # Credentials and preferences
├── memory/              # Local memory, agenda, and vectors
├── data/projects/       # Project metadata
├── modules/
│   ├── voice_input.py   # Whisper API
│   ├── voice_output.py  # Edge TTS
│   ├── gpt_client.py    # Model calls
│   ├── context_loader.py # Context inputs
│   ├── agenda.py        # Daily routine
│   └── actions.py       # Automation actions
└── requirements.txt     # Main dependencies
Installation

Set up your assistant in minutes

Follow the steps to prepare your environment, credentials, and start the conversational flow on desktop.

  1. Clone the repository and create a virtual environment:
git clone https://github.com/eduardo45MP/edu_assistant.git
cd edu_assistant
python3 -m venv venv
source venv/bin/activate  # Linux/macOS
# .\venv\Scripts\activate  # Windows
  2. Install dependencies:
pip install -r requirements.txt
  3. Copy the example file and add your keys:
cp config.example.json config.json
# Fill in API keys, preferences, and local paths
  4. Run the assistant:
python main.py
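The exact shape of config.json is not documented here; a hypothetical example of what the copied file might contain (every key name below is an assumption):

```json
{
  "openai_api_key": "sk-...",
  "tts_voice": "en-US-AriaNeural",
  "memory_path": "memory/",
  "projects_path": "data/projects/"
}
```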

Switch to text-only mode by replacing audio capture with CLI input if you do not have a microphone available.
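That swap can be as small as choosing the input source at startup. A hedged sketch, assuming the voice module exposes some transcribe callable (the names are illustrative):

```python
def get_user_input(use_voice: bool, transcribe=None) -> str:
    """Return one user utterance, from microphone or keyboard.

    `transcribe` stands in for whatever voice_input exposes; when voice
    is off (or no microphone is available) we fall back to CLI input.
    """
    if use_voice and transcribe is not None:
        return transcribe()
    return input("you> ")
```

The rest of the loop treats both paths identically, since each just yields a string.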

Open Source

Contribute new modules and automations

Suggestions and PRs are welcome, especially for offline support, new integrations, and UX.

Integrations

Connect external calendars, email services, or productivity platforms.

Offline mode

Explore local LLMs (Ollama, LM Studio) and open-source STT/TTS options.
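For the offline direction, Ollama serves a local REST endpoint (POST /api/generate on port 11434 by default). A sketch that only builds the request body, so nothing is sent and no server is needed (the model name is just an example):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_ollama_request(prompt: str, model: str = "llama3") -> bytes:
    """Serialise a non-streaming generation request for a local model."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return json.dumps(payload).encode("utf-8")

body = build_ollama_request("Summarise today's agenda.")
# Sending it would be e.g. urllib.request.urlopen(
#     urllib.request.Request(OLLAMA_URL, data=body,
#                            headers={"Content-Type": "application/json"}))
```

Because gpt_client.py is already an isolated module, pointing it at this endpoint instead of the OpenAI API would be a contained change.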

User experience

Build desktop/mobile interfaces or web dashboards to monitor the assistant in real time.

Keep the visual identity standards described in portfolio/docs/visualID.md and follow ROADMAP.md to align contributions with current priorities.