Enabling AI Assistant (Customer-Hosted LLM)

This article contains the steps to enable AI Assistant (formerly Copilot) using OpenAI or Azure as your LLM provider.

AI Assistant automatically creates correspondence by pulling information from the Candidate and Job records in Bullhorn ATS. It can be used to generate email messages, screening questions, pitches, and much more with the click of a button.

Before You Begin

Before AI Assistant can be enabled, you will need to sign a legal addendum. Reach out to your Bullhorn Account Manager to request AI Assistant and get started. Once the addendum has been signed, Bullhorn Support will contact you to initiate the setup.

The steps in this article are only required if you choose to enable AI Assistant with your own hosted LLM (OpenAI or Azure). If you are using the Bullhorn-hosted LLM, you do not need to complete these steps.

Step 1: Gather Details from your LLM Provider

To enable AI Assistant, you’ll need to set up an account with your chosen LLM provider and then enter some details into Bullhorn.

An LLM, or Large Language Model, is a deep learning model that has been pre-trained on vast amounts of data and is used to power generative AI systems such as ChatGPT.

Bullhorn currently supports the following LLM providers:
  • OpenAI
  • Azure OpenAI Services

OpenAI or Azure setup should be coordinated with your IT team to avoid unintended charges or other account issues.

  1. Open the setup guide for your chosen LLM provider (the OpenAI Instructions or the Azure OpenAI Instructions) and follow the steps to set up your account and gather the required details.
  2. Once you have completed the steps in the linked guide, continue to the next section.

Step 2: Configure AI Assistant in Bullhorn

  1. In Bullhorn, navigate to Menu > Admin > AI Assistant Admin.
  2. Select your LLM provider from the Select LLM dropdown (OpenAI or Azure).
  3. Additional fields will appear based on the LLM provider you select. Fill in these fields with the details you gathered in the previous section.
    • OpenAI:

      • The API Key should be the secret key you copied in Step 3 of the OpenAI Instructions.
      • For the Model Name, you can choose between "GPT4", "GPT4o", and "GPT4o mini". We currently recommend "GPT4o mini" (gpt-4o-mini).
      • The Token Limit is the maximum number of tokens shared between the prompt and the response for all users. Different models support different limits. We recommend a default of 128000 for gpt-4o-mini. To confirm your key and model outside Bullhorn first, see the optional connection check at the end of this article.
    • Azure:

      • The API Key should be the API key you copied in Step 14 of the Azure OpenAI Instructions.
      • The Model Name should be the Model Deployment Name you entered in Step 9 of the Azure OpenAI Instructions.
      • The Instance Name should be the unique name you entered in Step 4 of the Azure OpenAI Instructions.
      • The API Version should be set to "2023-05-15" (the YYYY-MM-DD format is required). To confirm these Azure details outside Bullhorn first, see the optional connection check at the end of this article.
  4. Once you've filled in the required fields, click the Test button to verify the API connection.
  5. Click Save in the bottom right corner.
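
Optional: Verify Your LLM Details Outside Bullhorn

If you'd like to confirm that your OpenAI API key and model work before entering them in Bullhorn, you can run a quick check from your own machine. The sketch below is a minimal example, assuming the official openai Python package (v1.x); the API key is a placeholder for your own secret key, and the prompt is arbitrary. This check is optional and independent of Bullhorn.

  # Minimal connectivity check for an OpenAI API key and model.
  # Assumes: pip install openai (v1.x). The key below is a placeholder;
  # use the secret key you copied during the OpenAI setup.
  from openai import OpenAI

  client = OpenAI(api_key="sk-...your-secret-key...")

  response = client.chat.completions.create(
      model="gpt-4o-mini",   # the model you plan to select in Bullhorn
      max_tokens=20,         # small cap; this is only a connectivity check
      messages=[{"role": "user", "content": "Reply with the word OK."}],
  )

  print(response.choices[0].message.content)

If the script prints a short reply, the key and model are working; an authentication error usually means the key was copied incorrectly or has been revoked.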
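
A similar sketch, again assuming the openai Python package, can confirm your Azure OpenAI details. The key, instance name, and deployment name below are placeholders for the values you gathered in the Azure OpenAI Instructions, and the endpoint is assumed to follow the standard https://<instance-name>.openai.azure.com pattern.

  # Minimal connectivity check for Azure OpenAI details.
  # Assumes: pip install openai (v1.x). All values below are placeholders.
  from openai import AzureOpenAI

  client = AzureOpenAI(
      api_key="...your-azure-api-key...",
      api_version="2023-05-15",  # the same API Version you enter in Bullhorn
      azure_endpoint="https://your-instance-name.openai.azure.com",
  )

  response = client.chat.completions.create(
      model="your-model-deployment-name",  # the Model Deployment Name from the Azure setup
      max_tokens=20,                       # small cap; connectivity check only
      messages=[{"role": "user", "content": "Reply with the word OK."}],
  )

  print(response.choices[0].message.content)

If this returns a reply, the key, endpoint, deployment name, and API version all line up; a 404 or authentication error usually points to a mismatched deployment name or endpoint.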