llm-behavior-eval

Evaluate large language models for undesirable behaviors such as bias.

Installation

In a virtualenv (see the sketch below if you need to create one):

pip3 install llm-behavior-eval
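
A minimal sketch of creating and activating a virtualenv before installing; the environment name .venv is an arbitrary choice, not mandated by the package:

python3 -m venv .venv            # create a virtual environment in ./.venv
source .venv/bin/activate        # activate it (on Windows: .venv\Scripts\activate)
pip3 install llm-behavior-eval   # install the package into the environment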

Releases

Version  Released    Bullseye / Python 3.9  Bookworm / Python 3.11  Files
0.1.1    2025-05-29
