Metadata-Version: 2.4
Name: senko
Version: 0.1.0
Summary: Very fast speaker diarization
Keywords: speaker-diarization,audio-processing,speech-processing,voice-activity-detection,speaker-verification,speech-analysis,cuda,gpu,machine-learning,audio-ai,pyannote,silero-vad,speaker-embeddings
Author-Email: Hamza Qayyum <mhamzaqayyum@icloud.com>
License-Expression: MIT
License-File: LICENSE
License-File: THIRD_PARTY_LICENSES
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: Intended Audience :: Education
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: C++
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Information Analysis
Classifier: Topic :: Multimedia :: Sound/Audio :: Analysis
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Environment :: GPU :: NVIDIA CUDA
Project-URL: Homepage, https://github.com/narcotic-sh/senko
Project-URL: Repository, https://github.com/narcotic-sh/senko
Project-URL: Issues, https://github.com/narcotic-sh/senko/issues
Requires-Python: <3.14,>=3.10
Requires-Dist: numpy
Requires-Dist: scikit-learn
Requires-Dist: umap-learn
Requires-Dist: hdbscan
Requires-Dist: numba
Requires-Dist: llvmlite
Requires-Dist: pyyaml
Requires-Dist: soundfile
Requires-Dist: termcolor
Requires-Dist: psutil
Requires-Dist: colour-science
Requires-Dist: silero-vad; sys_platform != "darwin"
Requires-Dist: torch<3,>=2.8; sys_platform != "darwin"
Requires-Dist: coremltools; sys_platform == "darwin"
Provides-Extra: nvidia-old
Requires-Dist: torch<3,>=2.8; extra == "nvidia-old"
Requires-Dist: torchaudio<3,>=2.8; extra == "nvidia-old"
Requires-Dist: torchvision<1,>=0.23; extra == "nvidia-old"
Requires-Dist: asteroid-filterbanks; extra == "nvidia-old"
Requires-Dist: einops; extra == "nvidia-old"
Requires-Dist: kaldifeat; extra == "nvidia-old"
Provides-Extra: nvidia
Requires-Dist: torch<3,>=2.8; extra == "nvidia"
Requires-Dist: torchaudio<3,>=2.8; extra == "nvidia"
Requires-Dist: torchvision<1,>=0.23; extra == "nvidia"
Requires-Dist: asteroid-filterbanks; extra == "nvidia"
Requires-Dist: einops; extra == "nvidia"
Requires-Dist: kaldifeat; extra == "nvidia"
Requires-Dist: cuml-cu12; extra == "nvidia"
Requires-Dist: cudf-cu12; extra == "nvidia"
Provides-Extra: nvidia-windows
Requires-Dist: torch<3,>=2.8; extra == "nvidia-windows"
Requires-Dist: torchaudio<3,>=2.8; extra == "nvidia-windows"
Requires-Dist: torchvision<1,>=0.23; extra == "nvidia-windows"
Requires-Dist: asteroid-filterbanks; extra == "nvidia-windows"
Requires-Dist: einops; extra == "nvidia-windows"
Provides-Extra: nvidia-old-windows
Requires-Dist: torch<3,>=2.8; extra == "nvidia-old-windows"
Requires-Dist: torchaudio<3,>=2.8; extra == "nvidia-old-windows"
Requires-Dist: torchvision<1,>=0.23; extra == "nvidia-old-windows"
Requires-Dist: asteroid-filterbanks; extra == "nvidia-old-windows"
Requires-Dist: einops; extra == "nvidia-old-windows"
Description-Content-Type: text/markdown

# Senko
> 閃光 (senkō) - a flash of light

A very fast and accurate speaker diarization pipeline.

1 hour of audio processed in 5 seconds (RTX 4090 + Ryzen 9 7950X).

On Apple M3, 1 hour in 7.7 seconds.

The pipeline achieves a best score of 13.5% DER on VoxConverse, 13.3% on AISHELL-4, and 26.5% on AMI-IHM. See the [evaluation](/evaluation) directory for more benchmarks and comparison with other diarization systems.

Senko powers the [Zanshin](https://zanshin.sh) media player.

## Usage
```python
import senko

diarizer = senko.Diarizer(device='auto', warmup=True, quiet=False, model_dir=None)

wav_path = 'audio.wav' # 16kHz mono 16-bit wav
result = diarizer.diarize(wav_path, generate_colors=False)

senko.save_json(result["merged_segments"], 'audio_diarized.json')
senko.save_rttm(result["merged_segments"], wav_path, 'audio_diarized.rttm')
```
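Each entry in `result["merged_segments"]` can then be consumed directly. A minimal sketch, assuming each segment carries start/end timestamps and a speaker label (the field names below are illustrative; see [`DOCS.md`](DOCS.md) for the actual schema):
```python
for seg in result["merged_segments"]:
    # 'start', 'end', 'speaker' are assumed field names, for illustration only
    print(f"{seg['start']:.2f}s - {seg['end']:.2f}s  speaker {seg['speaker']}")
```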
See [`examples/diarize.py`](examples/diarize.py) for an interactive script, and read [`DOCS.md`](DOCS.md) for more details.

Senko can also be used in notebooks, such as [Google Colab](https://colab.research.google.com/drive/12WBChh5cdw-RKRStr5hlFgQLPy7R950o?usp=sharing) and [Modal Notebooks](https://modal.com/notebooks/mhamzaqayyum/main/nb-ioITGZf4CRHhpO1ftYAGXr).

## Model Directory
Senko resolves each required source model with the following precedence:
- Explicit `model_dir=` argument or script `--model-dir` flag
- `SENKO_MODEL_DIR`
- Bundled default model directory

If a model is missing from the configured model directory, Senko falls back to the bundled copy for that specific asset.

On macOS, Senko also caches compiled CAM++ CoreML artifacts under `<model_dir>/cached`. This cache is disposable: it can be deleted safely if it becomes stale or you want to reclaim disk space.

```bash
export SENKO_MODEL_DIR=/path/to/models
python examples/diarize.py --model-dir /path/to/override
```
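The same override can also be made in code; an explicit path takes precedence over `SENKO_MODEL_DIR` and the bundled defaults:
```python
import senko

# Explicit model_dir wins over SENKO_MODEL_DIR and the bundled models
diarizer = senko.Diarizer(device='auto', model_dir='/path/to/models')
```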

## Installation
The following instructions are for Linux, macOS, and WSL. For Windows, see [`WINDOWS.md`](WINDOWS.md).

Prerequisites:
- `gcc`/`clang` - install separately on Linux/WSL; on macOS, install the Xcode Command Line Tools
- [`uv`](https://docs.astral.sh/uv/#installation)

Create a Python virtual environment and activate it:
```bash
uv venv --python 3.13 .venv
source .venv/bin/activate
```
Then install Senko:
```bash
# For NVIDIA GPUs with CUDA compute capability >= 7.5 (~GTX 16 series and newer)
uv pip install "senko[nvidia]"

# For NVIDIA GPUs with CUDA compute capability < 7.5 (~GTX 10 series and older)
uv pip install "senko[nvidia-old]"

# For NVIDIA GPUs on native Windows with CUDA compute capability >= 7.5
uv pip install "senko[nvidia-windows]"

# For NVIDIA GPUs on native Windows with CUDA compute capability < 7.5
uv pip install "senko[nvidia-old-windows]"

# For Mac (macOS 14+) and CPU execution on all other platforms
uv pip install senko
```
For NVIDIA, make sure the installed driver is CUDA 12 capable (you should see `CUDA Version: 12+` in the `nvidia-smi` output).
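A quick sanity check from Python, assuming one of the `nvidia*` extras is installed (this only verifies that PyTorch sees the GPU, not the full pipeline):
```python
import torch

if torch.cuda.is_available():
    print(torch.version.cuda)                   # CUDA version PyTorch was built against, e.g. '12.8'
    print(torch.cuda.get_device_capability(0))  # compute capability, e.g. (8, 9) for an RTX 4090
else:
    print('CUDA not available; check your driver')
```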

PyPI alpha wheels are smoke-tested on GitHub-hosted runners, which cover packaging and CPU/default initialization but not GPU end-to-end execution.

For setting up Senko for development, see [`DEV_SETUP.md`](DEV_SETUP.md).

## Accuracy
See the [evaluation](/evaluation) directory.

## Technical Details
Senko is a heavily optimized and slightly modified version of the speaker diarization pipeline found in the excellent [3D-Speaker](https://github.com/modelscope/3D-Speaker/tree/main/egs/3dspeaker/speaker-diarization) project.
It consists of four stages: VAD (voice activity detection), Fbank feature extraction, speaker embedding generation, and clustering (spectral or UMAP+HDBSCAN).

The following modifications have been made:
- The VAD model has been swapped from FSMN-VAD to either Senko's local `pyannote` backend (powered by the bundled segmentation-3.0 assets) or [Silero VAD](https://github.com/snakers4/silero-vad)
- Fbank feature extraction is done fully upfront: on the GPU using [kaldifeat](https://github.com/csukuangfj/kaldifeat) on NVIDIA, and on the CPU using all cores otherwise
- Batched inference of the CAM++ embedding model
- Clustering on NVIDIA (with a GPU of CUDA compute capability 7.0+) can be done on the GPU through [RAPIDS](https://docs.rapids.ai/api/cuml/stable/zero-code-change/)

On Linux/WSL, Senko's local segmentation-3.0 backend and CAM++ run using PyTorch, but on Mac, both models run through CoreML. The CAM++ CoreML conversion was done from scratch in this project (see [`tracing/coreml`](tracing/coreml)), but the segmentation-3.0 converted model and interfacing code are taken from the excellent [FluidAudio](https://github.com/FluidInference/FluidAudio) project by Fluid Inference. No `pyannote.audio` package install is required at runtime.
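As a rough mental model of the four stages described above (all function names here are illustrative stubs, not Senko's internal API):
```python
# Hypothetical stage functions, named for illustration only
def run_vad(wav): ...                      # 1. VAD: local pyannote backend or Silero VAD
def extract_fbank(wav, regions): ...       # 2. Fbank, computed fully upfront (kaldifeat on NVIDIA)
def embed_campplus(feats, device): ...     # 3. batched CAM++ speaker embeddings
def cluster(embeddings): ...               # 4. spectral or UMAP+HDBSCAN (RAPIDS on capable NVIDIA GPUs)

def diarize_sketch(wav, device):
    regions = run_vad(wav)
    feats = extract_fbank(wav, regions)
    embeddings = embed_campplus(feats, device)
    labels = cluster(embeddings)
    return regions, labels  # speech regions tagged with speaker labels
```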

## Showcase
| Application | Description |
|----------|-------------|
| [reaper_speech_diarizer](https://github.com/atmosfar/reaper_speech_diarizer) | Split a downmixed voice recording into separate tracks for each speaker in the REAPER DAW |
| [scribe](https://github.com/trailofbits/scribe) | Produce speaker-attributed transcripts using [parakeet-mlx](https://github.com/senstella/parakeet-mlx) and Senko |
| [verbatim](https://github.com/gaspardpetit/verbatim) | High-quality multilingual speech-to-text with diarization |

Open a PR or message us on Discord if you'd like your application that uses Senko added here too.

## FAQ
<details>
<summary>Is there any way to visualize the output diarization data?</summary>
<br>
Absolutely. The <a href="https://github.com/narcotic-sh/zanshin">Zanshin</a> media player is made entirely for this purpose. Zanshin is powered by Senko, so the easiest way to visualize the diarization data is simply to use it. It's currently packaged for Mac (Apple Silicon). It also works on Windows and Linux, but without packaging yet (coming soon); on those platforms you'll need to clone the repo and launch it from the terminal. See <a href="DEV_SETUP.md">here</a> for instructions.
<br>
<br>
You can also manually load the diarization data that Senko generates into Zanshin. First, turn off diarization in Zanshin by going into Settings and turning off <code>Identify Speakers</code>. Then, after you add a media item, click on it, and on the player page press the <code>H</code> key. In the textbox that appears, paste the contents of the output JSON file that <code><a href="examples/diarize.py">examples/diarize.py</a></code> generates.
</details>
<details>
<summary>What languages does Senko support?</summary>
<br>
Generally, the pipeline should work for any language, as it relies on acoustic patterns rather than words or speech patterns. That said, the embedding model used in this pipeline was trained on a mix of English and Mandarin Chinese, so it will likely work best on those two languages.
</details>
<details>
<summary>Are overlapping speaker segments detected correctly?</summary>
<br>
The current output will not contain any overlapping speaker segments; i.e. at most one speaker is reported to be speaking at any given time. Despite this, the pipeline still performs well at determining who the dominant speaker is at any given moment, even in chaotic audio with speakers talking over each other (e.g. casual podcasts). That said, detecting overlapping speaker segments is a planned feature, since the bundled segmentation-3.0 model (currently used only for VAD) supports it.
</details>
<details>
<summary>How fast is the pipeline on CPU (<code>device=cpu</code>)?</summary>
<br>
On a Ryzen 9 9950X, it takes 42 seconds to process 1 hour of audio.
</details>
<details>
<summary>Does the entire pipeline run fully on the GPU, if available?</summary>
<br>
On Linux/WSL with <code>device=cuda</code>, all parts of the pipeline run on the GPU, so long as the NVIDIA card has CUDA compute capability &ge; 7.0 (~GTX 16 series and newer); otherwise clustering falls back to the CPU.
<br><br>
On native Windows with <code>device=cuda</code>, everything except fbank extraction and clustering runs on the GPU.
<br><br>
On Mac, VAD and embeddings run on the ANE and CPU through CoreML, and fbank extraction and clustering run on the CPU.
</details>
<details>
<summary>Known limitations?</summary>
<br>
- The pipeline works best when the audio recording quality is good; the ideal setting is a professional podcast studio. Heavy background noise, background music, or a generally low-fidelity recording will degrade diarization performance significantly. Note that it's also possible to have generally good recording quality but still low-fidelity recorded voice quality; <a href="https://www.youtube.com/watch?v=89K8-4tHhgc">this video</a> is an example.
<br><br>
- It is rare but possible that voices that sound very similar get clustered as one voice. This can happen if the voices are genuinely extremely similar, or, more commonly, if the audio recording fidelity is low.
<br><br>
- The same voice recorded with more than one microphone, or in more than one recording setting, within the same audio file will often get detected as multiple speakers.
<br><br>
- If a single person uses more than one voice in the same recording (i.e. changes the auditory texture/tone of their voice, e.g. by doing an impression of someone else), their speech will almost certainly get detected as multiple speakers.
</details>

## Troubleshooting
If you run into Numba related errors after upgrading/downgrading the `numba` package or other packages that use it (`umap-learn`, `pynndescent`, etc.), they might be caused by failed Numba [cache invalidation](https://numba.readthedocs.io/en/stable/developer/caching.html). In such a case, clear the cache manually like so:
```bash
rm -rf ~/.cache/senko
```
Such errors may also appear if you have [Zanshin](https://github.com/narcotic-sh/zanshin) installed and its Python environment contains different package versions than the development venv you're using for Senko.

If you are using a custom model directory and want to clear reusable CoreML artifacts, delete the `<model_dir>/cached` directory. Senko will recreate it automatically on the next run if needed.
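For example, with Python's standard library (assuming `/path/to/models` is your custom model directory):
```python
import shutil
from pathlib import Path

# Remove the compiled CoreML cache; Senko recreates it on the next run
shutil.rmtree(Path('/path/to/models') / 'cached', ignore_errors=True)
```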

## Community & Support
Join the [Discord](https://discord.gg/Nf7m5Ftk3c) server to ask questions, suggest features, discuss Senko and Zanshin development, etc.

## Future Improvements & Directions
- Overlapping speaker segments support
- Improve the speaker color generation algorithm
- Support for Intel and AMD GPUs
- Experiment with `torch.compile()`
- Experiment with Modular [MAX](https://www.modular.com/blog/bring-your-own-pytorch-model) engine (faster CPU inference speed?)
- Background noise removal ([DeepFilterNet](https://github.com/Rikorose/DeepFilterNet)), [speech enhancement](https://github.com/nanless/universal-speech-enhancement)
- Live progress reporting
- VBx-based clustering ([DiariZen](https://github.com/BUTSpeechFIT/DiariZen))
