Setting Up a Text Embedding Provider

7.4 U70+ Beta Feature

A text embedding provider has two jobs:

  1. At index time, use the specified model to create a text embedding representation of a text sample extracted from the index document’s fields.
  2. At search time, use the specified model to create a text embedding representation of the search phrase typed into the search bar.

The model you use is paramount: your vectorized data is only as good as the model you choose.

The model you choose also performs the similarity search between the search phrase's text embedding and each document's text embedding. Models are hosted on Hugging Face, even if you use txtai as the embedding provider.
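
For example, once an embedding provider is running, transforming a short phrase yields a long vector of floating point numbers. Here is a minimal sketch, assuming the local txtai test setup described below (listening on localhost:8000) and txtai's transform endpoint:

    curl "http://localhost:8000/transform?text=What%20is%20semantic%20search%3F"

The response is a JSON array of floats; with the sentence-transformers/msmarco-distilbert-base-dot-prod-v3 model used below, it contains 768 entries.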

Text Embedding Provider             Recommended for Production?
txtai                               Yes
Hugging Face Inference Endpoints    Yes
Hugging Face Inference API          No

Configure and Run txtai

Note

The txtai configuration here is intended for demonstration. Please read the txtai documentation to learn more.

Set up txtai so that Liferay can access its APIs. To run txtai in a Docker container, see the txtai documentation or follow these basic steps for Linux:

  1. Create a txtai folder and cd into it.

  2. From the txtai folder, download the Dockerfile with

    curl https://raw.githubusercontent.com/neuml/txtai/master/docker/api/Dockerfile -O
    
  3. Create a config.yml file in the txtai folder and give it these minimal contents:

    path: /tmp/index
    writable: False

    embeddings:
      path: sentence-transformers/msmarco-distilbert-base-dot-prod-v3
    Important

    The model you choose is entered as the embeddings path.

  4. From the txtai folder, run

    docker build -t txtai-api .
    
  5. Start the container:

    docker run -p 8000:8000 --rm -it txtai-api
    

    Depending on the size of the models, it can take several minutes for the service to initialize.

  6. In Liferay, open the Global Menu (Global Menu) → Control Panel → Instance Settings. In the Search Experiences category, click Semantic Search.

    • Set Text Embeddings Enabled to true.
    • Select txtai as the Text Embedding Provider.
    • Enter the txtai server's ip:port as the Host Address. If you're following the test setup and running a local Liferay instance, use localhost:8000.
    • If you followed the above test setup, leave the default values in Basic Auth Username and Basic Auth Password.
    • Leave the default value (768) in Embedding Vector Dimensions.
    Important

    The Embedding Vector Dimensions value must match the dimensions of the configured model. For txtai, the model is specified in the config.yml file. See the model's documentation to find the proper number of dimensions.

Before saving the configuration, click Test Configuration to ensure that Liferay can connect with the txtai server and that the settings are correct.
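
You can also sanity check the txtai server from the command line, independently of Liferay. A minimal sketch, again assuming the test setup above and txtai's transform endpoint (jq is optional and only used here to count dimensions):

    # Request an embedding for a sample phrase from the txtai API
    curl -s "http://localhost:8000/transform?text=hello%20world"

    # Count the returned dimensions (requires jq); expect 768 for the model above
    curl -s "http://localhost:8000/transform?text=hello%20world" | jq length

The count should equal the value entered in Embedding Vector Dimensions; if it doesn't, Liferay's configuration test fails.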

This example setup is intended for demonstration. See the txtai documentation to find the setup that meets your needs (e.g., running a GPU container for increased performance).

Using the Hugging Face Inference API

Important

The Hugging Face Inference API is suitable for testing and development. To use Hugging Face as the text embedding provider for production, use the Hugging Face Inference Endpoints provider.

To use the Hugging Face Inference API, first create a Hugging Face account.

Once you have an account,

  1. Go to your Hugging Face account settings and copy your access token.

  2. In Liferay, open the Global Menu (Global Menu) → Control Panel → Search Experiences → Semantic Search.

    Select Hugging Face Inference API as the text embedding provider and enter the access token you copied.

  3. Choose one of the models from the list at https://huggingface.co/models?pipeline_tag=feature-extraction.

  4. Enter the model's name in the Model field.

  5. Enter the proper number of Embedding Vector Dimensions to match the model you chose.

    Important

    The Embedding Vector Dimensions must match those of the configured model. See the model's documentation on Hugging Face to find the proper number of dimensions.

  6. Configure the other Hugging Face settings as desired:

    Model Timeout: Set the time (in seconds) to wait for the model to be loaded before timing out. You can pin Hugging Face models in memory to avoid repeated time-consuming loading of models.

Before saving the configuration, click the Test Configuration button to ensure that Liferay can connect with the Hugging Face Inference API and that the settings are correct.
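
To verify your token and model choice outside of Liferay, you can call the Inference API directly. A minimal sketch, assuming the sentence-transformers/msmarco-distilbert-base-dot-prod-v3 model, a placeholder token (replace <your-access-token>), and the feature extraction pipeline URL documented by Hugging Face:

    # Request an embedding from the Hugging Face Inference API
    curl -X POST \
      "https://api-inference.huggingface.co/pipeline/feature-extraction/sentence-transformers/msmarco-distilbert-base-dot-prod-v3" \
      -H "Authorization: Bearer <your-access-token>" \
      -H "Content-Type: application/json" \
      -d '{"inputs": "What is semantic search?", "options": {"wait_for_model": true}}'

The length of the returned vector is the number to enter in Embedding Vector Dimensions.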

Using Hugging Face Inference Endpoints

The Hugging Face Inference Endpoints service is an enterprise-grade, paid text embedding service from Hugging Face. When testing and developing your semantic search solution, you can use the Inference API.

Most of the setup is completed in Hugging Face. After setting up the Inference Endpoint,

  1. Go to your Hugging Face account settings and copy your access token.

  2. In Liferay, open the Global Menu (Global Menu) → Control Panel → Search Experiences → Semantic Search.

    Select Hugging Face Inference Endpoints as the text embedding provider and enter the access token you copied.

  3. Enter your endpoint's URL as the Host Address.

  4. Enter the proper number of Embedding Vector Dimensions to match the model you chose.

    Important

    The Embedding Vector Dimensions must match those of the configured model. For Inference Endpoints, the model is chosen when you create the endpoint in Hugging Face. See the model's documentation to find the proper number of dimensions.

Before saving the configuration, click the Test Configuration button to ensure that Liferay can connect with the Hugging Face Inference Endpoint and that the settings are correct.
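
As with the other providers, you can verify the endpoint outside of Liferay. A minimal sketch, where the endpoint URL is a placeholder (copy the real URL from your endpoint's page in the Hugging Face console):

    # Request an embedding from a deployed Hugging Face Inference Endpoint
    curl -X POST \
      "https://<your-endpoint>.endpoints.huggingface.cloud" \
      -H "Authorization: Bearer <your-access-token>" \
      -H "Content-Type: application/json" \
      -d '{"inputs": "What is semantic search?"}'

Again, the length of the returned vector should match the Embedding Vector Dimensions setting.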
