
Zero-shot and Few-shot Labeling

Say goodbye to tedious and time-consuming manual labeling with the integration of OpenAI's GPT-3.5 Turbo model.
You can now provide zero or a few labeled examples and let GPT auto-label your data instantly, in any format including PDFs. This new feature paves the way for a more efficient and streamlined approach to data annotation, enabling you to unlock your team's full potential. We currently support Named Entity Recognition (NER) tasks, with document classification and relation extraction coming soon. Stay tuned!
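To give a sense of what few-shot labeling means in practice, here is a minimal sketch of how labeled examples might be assembled into an LLM prompt. UbiAI builds its prompts internally; the function name and prompt wording below are illustrative assumptions, not the platform's actual code.

```python
def build_ner_prompt(labels, examples, text):
    """Assemble a few-shot NER prompt for a chat-style LLM.

    labels   -- entity label names, e.g. ["PERSON", "ORG"]
    examples -- (sentence, annotation) pairs already labeled by hand;
                pass an empty list for zero-shot labeling
    text     -- the unlabeled sentence to annotate
    """
    parts = [
        "Extract named entities from the text.",
        "Labels: " + ", ".join(labels),
    ]
    for sentence, annotation in examples:
        parts.append(f"Text: {sentence}\nEntities: {annotation}")
    parts.append(f"Text: {text}\nEntities:")  # the model completes this line
    return "\n\n".join(parts)


prompt = build_ner_prompt(
    labels=["PERSON", "ORG"],
    examples=[("Tim Cook leads Apple.",
               '[("Tim Cook", "PERSON"), ("Apple", "ORG")]')],
    text="Satya Nadella joined Microsoft in 1992.",
)
print(prompt)
```

With zero examples the prompt contains only the instruction, the label list, and the target text, which is exactly the zero-shot case.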

Add your OpenAI Key

The first step is to add your OpenAI API key on the profile page.
To enable the zero-shot and few-shot labeling feature, go back to the annotation interface, select the LLM tab under the Named Entity Recognition or Text Classification tab, and press Add new model. Other zero-shot labeling tasks, such as Relation Extraction and Span Categorization, are not supported yet.

Enable the LLM labeling

By default, 5 labeled examples are added to the prompt sent to the OpenAI GPT model. To change the configuration of the LLM, press the configure button to:
  • Select the type of LLM: we currently offer GPT-3.5 and GPT-4
  • Select the temperature of the LLM: controls how variable the output is
  • Select the context length: defines the context length accepted by the LLM; we support 4k and 16k
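The three options above can be thought of as the request parameters sent to the model. The sketch below shows one plausible way to validate and map them; the mapping itself is an assumption about the platform's internals, though the model names and the 0-2 temperature range follow OpenAI's public API.

```python
# Token budgets for the two context-length options offered in the UI.
CONTEXT_LIMITS = {"4k": 4096, "16k": 16384}

def llm_config(model="gpt-3.5-turbo", temperature=0.0, context="4k"):
    """Validate the UI choices and return a request-style config dict.

    Hypothetical helper for illustration; not part of the UbiAI API.
    """
    if model not in ("gpt-3.5-turbo", "gpt-4"):
        raise ValueError(f"unsupported model: {model}")
    if not 0.0 <= temperature <= 2.0:  # 0 = most deterministic output
        raise ValueError("temperature must be between 0 and 2")
    if context not in CONTEXT_LIMITS:
        raise ValueError("context must be '4k' or '16k'")
    return {
        "model": model,
        "temperature": temperature,
        "max_context_tokens": CONTEXT_LIMITS[context],
    }

cfg = llm_config(model="gpt-4", temperature=0.2, context="16k")
print(cfg)
```

A low temperature (close to 0) is usually the right choice for labeling, since you want reproducible annotations rather than creative variation.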

Contextual Label Description

Users can provide a detailed description for each label. These descriptions give the AI model contextual understanding, helping it disambiguate named entities and extract facts more accurately.
  1. Access the UbiAI platform and select the LLM tab in the annotation interface
  2. Select the 16k context length to be able to add descriptions
  3. Add a description for each label
  4. Save
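The steps above boil down to injecting extra text into the prompt, which is also why the larger 16k context is required: descriptions consume tokens. A minimal sketch, assuming descriptions are rendered as one line per label (the exact wording UbiAI uses is not public):

```python
def labels_with_descriptions(descriptions):
    """Render label descriptions as prompt lines.

    descriptions -- dict mapping label name -> human-written description.
    Hypothetical helper; labels and descriptions below are examples only.
    """
    return "\n".join(f"- {label}: {desc}" for label, desc in descriptions.items())

block = labels_with_descriptions({
    "DRUG": "A medication name, e.g. 'aspirin'; not a dosage or a disease.",
    "DOSAGE": "An amount and unit of a drug, e.g. '500 mg twice daily'.",
})
print(block)
```

Descriptions like these are what let the model tell apart otherwise ambiguous entities, for instance a drug name versus the disease it treats.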

Run LLM Auto-labeling

We are now ready to run the zero-shot (or few-shot) auto-labeling. Simply check the model, return to the annotation interface, and press the Predict button. It's that simple!
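The platform applies the predictions for you when you press Predict, but it may help to see the kind of post-processing involved: turning the LLM's textual answer into character-offset annotations. The JSON response shape below is an assumption for illustration only.

```python
import json

def to_spans(text, llm_json):
    """Convert an assumed {"entities": [{"text": ..., "label": ...}]} reply
    into (start, end, label) character-offset spans over the source text."""
    spans = []
    for ent in json.loads(llm_json)["entities"]:
        start = text.find(ent["text"])
        if start != -1:  # skip strings the model hallucinated
            spans.append((start, start + len(ent["text"]), ent["label"]))
    return spans

text = "Satya Nadella joined Microsoft in 1992."
reply = ('{"entities": [{"text": "Satya Nadella", "label": "PERSON"}, '
         '{"text": "Microsoft", "label": "ORG"}]}')
print(to_spans(text, reply))  # → [(0, 13, 'PERSON'), (21, 30, 'ORG')]
```

Grounding predictions back to character offsets like this is what lets the spans appear as highlighted annotations in the interface.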