submaster/scripts/auto_install.sh
Cesar Mendivil c22767d3d4 Refactor SRT to Kokoro synthesis script for improved CLI functionality and compatibility
- Updated `srt_to_kokoro.py` to provide a CLI entrypoint with argument parsing.
- Enhanced error handling and logging for better user feedback.
- Introduced a compatibility layer for legacy scripts.
- Added configuration handling via `config.toml` for endpoint and API key.
- Improved documentation and comments for clarity.

Enhance PipelineOrchestrator with in-process transcriber fallback

- Implemented `InProcessTranscriber` to handle transcription using multiple strategies.
- Added support for `srt_only` flag to return translated SRT without TTS synthesis.
- Improved error handling and logging for transcriber initialization.

Add installation and usage documentation

- Created `INSTALLATION.md` with detailed setup instructions for CPU and GPU environments.
- Added `USAGE.md` with practical examples for common use cases and command-line options.
- Included a script for automated installation and environment setup.

Implement SRT burning utility

- Added `burn_srt.py` to facilitate embedding SRT subtitles into video files using ffmpeg.
- Provided command-line options for style and codec customization.
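
For reference, hard-burning subtitles with ffmpeg comes down to a call along these lines (filenames and the style string are placeholders; the exact flags exposed by `burn_srt.py` may differ):

    ffmpeg -i input.mp4 -vf "subtitles=subs.srt:force_style='FontSize=24'" -c:a copy output.mp4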

Update project configuration management

- Introduced `config.py` to centralize configuration loading from `config.toml`.
- Ensured that environment variables are not read to avoid implicit overrides.

Enhance package management with `pyproject.toml`

- Added `pyproject.toml` for modern packaging and dependency management.
- Defined optional dependencies for CPU and TTS support.
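
The extras can be combined at install time; the names `cpu` and `tts` below match the ones used by `scripts/auto_install.sh`:

    pip install -e ".[cpu]"        # CPU-only stack
    pip install -e ".[cpu,tts]"    # CPU stack plus local Coqui TTS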

Add smoke test fixture for SRT

- Created `smoke_test.srt` as a sample subtitle file for testing purposes.

Update requirements and setup configurations

- Revised `requirements.txt` and `setup.cfg` for better dependency management and clarity.
- Included installation instructions for editable mode and local TTS support.
2025-10-25 00:00:02 -07:00


#!/usr/bin/env bash
set -euo pipefail
# Auto-install helper for the project.
# Usage: scripts/auto_install.sh [--venv PATH] [--cpu|--gpu] [--local-tts] [--kokoro] [--no-editable]
# Examples:
# ./scripts/auto_install.sh --cpu --local-tts
# ./scripts/auto_install.sh --venv .venv311 --gpu --kokoro
VENV=".venv311"
MODE="cpu"
USE_LOCAL_TTS="false"
USE_KOKORO="false"
EDITABLE="true"
TORCH_VERSION=""
print_help(){
  cat <<'EOF'
Usage: auto_install.sh [options]

Options:
  --venv PATH         Path to virtualenv (default: .venv311)
  --cpu               Install CPU-only PyTorch (default)
  --gpu               Install PyTorch (GPU). You may need to pick the right CUDA wheel manually.
  --torch-version V   Optional torch version to install (e.g. 2.2.2)
  --local-tts         Install Coqui TTS and extras for local TTS
  --kokoro            Assume you'll use the Kokoro endpoint (no local TTS extras)
  --no-editable       Do not install in editable mode; install packages normally
  -h, --help          Show this help

Examples:
  ./scripts/auto_install.sh --cpu --local-tts
  ./scripts/auto_install.sh --venv .venv311 --gpu --kokoro
EOF
}
while [[ ${#} -gt 0 ]]; do
  case "$1" in
    --venv) VENV="$2"; shift 2;;
    --cpu) MODE="cpu"; shift;;
    --gpu) MODE="gpu"; shift;;
    --torch-version) TORCH_VERSION="$2"; shift 2;;
    --local-tts) USE_LOCAL_TTS="true"; shift;;
    --kokoro) USE_KOKORO="true"; shift;;
    --no-editable) EDITABLE="false"; shift;;
    -h|--help) print_help; exit 0;;
    *) echo "Unknown arg: $1"; print_help; exit 2;;
  esac
done
echo "Installing into venv: ${VENV}"
if [[ ! -d "${VENV}" ]]; then
  echo "Creating virtualenv ${VENV} (Python 3.11 recommended)..."
  python3.11 -m venv "${VENV}" || python3 -m venv "${VENV}"
fi
echo "Activating virtualenv..."
source "${VENV}/bin/activate"
echo "Upgrading pip, setuptools and wheel..."
python -m pip install --upgrade pip setuptools wheel
install_torch_cpu(){
  if [[ -n "${TORCH_VERSION}" ]]; then
    echo "Installing CPU torch ${TORCH_VERSION} from PyTorch CPU index..."
    python -m pip install "torch==${TORCH_VERSION}" --index-url https://download.pytorch.org/whl/cpu
  else
    echo "Installing latest CPU torch from PyTorch CPU index..."
    python -m pip install torch --index-url https://download.pytorch.org/whl/cpu
  fi
}
install_torch_gpu(){
  if [[ -n "${TORCH_VERSION}" ]]; then
    echo "Installing torch ${TORCH_VERSION} (GPU) - pip will try to pick a matching wheel"
    python -m pip install "torch==${TORCH_VERSION}"
  else
    echo "Installing torch (GPU) - pip will try to pick a matching wheel. If you need a specific CUDA wheel, install it manually following https://pytorch.org/"
    python -m pip install torch
  fi
}
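# If you need a wheel for a specific CUDA version, point pip at the matching
# PyTorch index yourself (the cu121 index below is only an illustration --
# pick the one that matches your driver/CUDA setup, see https://pytorch.org/):
#   python -m pip install torch --index-url https://download.pytorch.org/whl/cu121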
if [[ "${MODE}" == "cpu" ]]; then
install_torch_cpu
else
install_torch_gpu
fi
EXTRAS=()
if [[ "${USE_LOCAL_TTS}" == "true" ]]; then
EXTRAS+=(tts)
fi
if [[ "${MODE}" == "cpu" ]]; then
EXTRAS+=(cpu)
fi
if [[ "${EDITABLE}" == "true" ]]; then
if [[ ${#EXTRAS[@]} -gt 0 ]]; then
IFS=, extras_str="${EXTRAS[*]}"
echo "Installing package editable with extras: ${extras_str// /,}"
python -m pip install -e .[${extras_str// /,}]
else
echo "Installing package editable without extras"
python -m pip install -e .
fi
else
echo "Installing dependencies from requirements.txt"
python -m pip install -r requirements.txt
fi
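# For example, running this script with --cpu --local-tts makes the editable
# install above resolve to: python -m pip install -e ".[tts,cpu]"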
echo "Optional: if you plan to use Kokoro remote endpoint, copy and edit config.toml.example -> config.toml and fill kokoro values."
if [[ -f config.toml.example && ! -f config.toml ]]; then
  echo "Copying config.toml.example -> config.toml (edit before running)"
  cp config.toml.example config.toml
fi
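# The shape of config.toml is roughly as follows (key names here are only
# illustrative -- config.toml.example is the authoritative template):
#   [kokoro]
#   endpoint = "https://your-kokoro-host/v1"
#   api_key = "YOUR_API_KEY"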
echo "Setup complete. Next steps:\n source ${VENV}/bin/activate\n ./${VENV}/bin/python -m whisper_project.main --help"
echo "Done."