# Zero-shot and Few-shot Labeling

Say goodbye to tedious and time-consuming manual labeling with the integration of OpenAI’s GPT-3.5 Turbo model.

You can now provide zero or a few labeled examples and let GPT auto-label your data instantly, in any format including PDFs. This new feature paves the way for a more efficient and streamlined approach to data annotation, enabling you to unlock your team’s full potential. We currently support Named Entity Recognition (NER) tasks; document classification and relation extraction will be added soon, so stay tuned.

## Add your OpenAI Key

The first step is to add your OpenAI API key on the profile page:

<figure><img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2Fm0iar6iZvkklocY8hXaX%2Fimage.png?alt=media&#x26;token=a14fa3bd-700d-4c63-a34d-88d48ffcfdbf" alt=""><figcaption></figcaption></figure>

To enable the zero-shot and few-shot labeling feature, go back to the annotation interface, select the LLM tab under the Named Entity Recognition or Text Classification tab, and press the Add new model button <img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2FfFl2gmI36bUvE9wuZjnu%2Fimage.png?alt=media&#x26;token=d50d21d9-bb00-4ccb-8662-455c38f53a4a" alt="" data-size="line">. Other zero-shot labeling tasks, such as Relation Extraction and Span Categorizer, are not supported yet.

<figure><img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2FZh4Qd6U25KRPllPudp6t%2Fimage.png?alt=media&#x26;token=ed2a0384-3659-48e0-8320-95ab4359eef4" alt=""><figcaption></figcaption></figure>

## Enable the LLM labeling

By default, five labeled examples are added to the prompt sent to the OpenAI GPT model. To change the LLM configuration, press the configure button <img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2FFMhnvSlPImnJM8d7fnc3%2Fimage.png?alt=media&#x26;token=70bdd436-6f5c-4e97-9472-d10df1f38b30" alt="" data-size="line"> to:

* Select the type of LLM: we currently offer GPT-3.5 and GPT-4
* Select the temperature of the LLM: this controls how variable the output is
* Select the context length: the maximum context accepted by the LLM; we support 4k and 16k

<figure><img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2FpilpEcIfquycdSP3Zyek%2Fimage.png?alt=media&#x26;token=80fd27bc-b435-473a-95b5-d9653076755b" alt=""><figcaption></figcaption></figure>
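To make the settings above concrete, here is a minimal sketch of how a few-shot NER request might be assembled. The `build_few_shot_prompt` helper, the example format, and the model names are illustrative assumptions, not UbiAI's actual implementation:

```python
def build_few_shot_prompt(text, labels, examples, max_examples=5):
    """Assemble a few-shot NER prompt: task instructions, up to
    max_examples labeled examples (5 by default), then the target text."""
    lines = [
        "Extract named entities from the text below.",
        f"Allowed labels: {', '.join(labels)}.",
    ]
    for ex_text, ex_entities in examples[:max_examples]:
        lines.append(f"Text: {ex_text}")
        lines.append(f"Entities: {ex_entities}")
    lines.append(f"Text: {text}")
    lines.append("Entities:")
    return "\n".join(lines)

# Hypothetical mapping of the configure-dialog options onto request settings.
config = {
    "model": "gpt-3.5-turbo-16k",  # GPT-3.5 with 16k context; GPT-4 also offered
    "temperature": 0.0,            # low temperature = more deterministic labels
}

prompt = build_few_shot_prompt(
    "Apple hired Tim Cook.",
    labels=["ORG", "PERSON"],
    examples=[("Google acquired DeepMind.", "ORG: Google; ORG: DeepMind")],
)
```

A low temperature is usually preferable for labeling, since you want consistent predictions rather than creative variation.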

## Contextual Label Description

Users can provide a detailed description for each label, giving the AI model the contextual understanding it needs to disambiguate named entities and extract facts more accurately.

1. Access the UbiAI platform and select the LLM tab in the annotation interface
2. Select the 16k context length to be able to add descriptions
3. Add a description for each label
4. Save

<figure><img src="https://3073024999-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F0hqV0hLtifjaqlfkGT7n%2Fuploads%2Fj830Zd9Ky9Ofvoq2ZH8q%2Fimage.png?alt=media&#x26;token=6babcba9-05c5-46fc-80e5-ba828b0a62e8" alt=""><figcaption></figcaption></figure>
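The benefit of label descriptions is that they can be folded directly into the prompt, so the model sees a definition of each label rather than a bare tag name. The rendering format below is an assumption for illustration, not UbiAI's internal prompt template:

```python
# Hypothetical label descriptions, e.g. to disambiguate "Apple" (ORG)
# from a fruit, or a drug name from an ordinary noun.
label_descriptions = {
    "ORG": "A company, institution, or other organization.",
    "DRUG": "A pharmaceutical compound or medication name.",
}

def describe_labels(descriptions):
    """Render each label with its description, one per line,
    in a form that can be prepended to the labeling prompt."""
    return "\n".join(
        f"- {label}: {desc}" for label, desc in descriptions.items()
    )
```

Descriptions lengthen the prompt, which is why the 16k context length is required to use them.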

## Run LLM Auto-labeling

We are now ready to run the zero-shot (or few-shot) auto-labeling. Simply check the model, return to the annotation interface, and press the predict button. It's that simple!

{% embed url="https://drive.google.com/file/d/1O7DmLE3vDiqFXl6xDZpjDD6o4Te0UGEl/view?usp=sharing" %}


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://ubiai.gitbook.io/ubiai-documentation/zero-shot-and-few-shot-labeling.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
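The query request above can be built with nothing but the Python standard library. The endpoint comes from the docs; actually sending the request over the network is left to the caller:

```python
from urllib.parse import urlencode

# Page URL from the documentation; only the `ask` parameter is added.
BASE = "https://ubiai.gitbook.io/ubiai-documentation/zero-shot-and-few-shot-labeling.md"

def ask_url(question: str) -> str:
    """Return the GET URL for a natural-language documentation query,
    with the question safely percent-encoded."""
    return f"{BASE}?{urlencode({'ask': question})}"

# e.g. urllib.request.urlopen(ask_url("Which tasks support zero-shot labeling?"))
```

Keeping the question specific and self-contained, as the docs advise, gives the retrieval endpoint the best chance of returning a direct answer.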
