saga-llm-evaluation

A versatile Python library for evaluating the performance of large language models (LLMs) on Natural Language Processing (NLP) tasks. Developed by Sagacify.

Installation

In a virtualenv (create one first if you do not already have one):

pip3 install saga-llm-evaluation
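A minimal sketch of the full sequence, assuming a POSIX shell and that you want the environment in a local `.venv` directory (the directory name is an arbitrary choice, not required by the package):

```shell
# Create an isolated virtual environment in .venv
python3 -m venv .venv

# Activate it for the current shell session
. .venv/bin/activate

# Install saga-llm-evaluation into the environment
pip3 install saga-llm-evaluation
```

After activation, `pip3` and `python3` resolve to the environment's own binaries, so the install does not touch system-wide packages.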

Releases

Version  Released    Bullseye (Python 3.9)  Bookworm (Python 3.11)  Files
0.12.1   2024-10-22
0.12.0   2024-10-22
0.11.7   2024-10-21
0.11.6   2024-09-26
0.11.5   2024-09-26
0.11.4   2024-09-26
0.11.3   2024-09-26
0.11.2   2024-09-25
0.11.1   2024-09-24
0.11.0   2024-09-24
0.10.1   2024-06-27

Page last updated 2024-11-02 09:32:31 UTC