Fireworks
You are currently on a page documenting the use of Fireworks models as text completion models. Many popular Fireworks models are chat completion models; you may be looking for the ChatFireworks page instead.
Fireworks accelerates product development on generative AI by creating an innovative AI experimentation and production platform.
This example goes over how to use LangChain to interact with Fireworks models.
Overview
Integration details
| Class | Package | Local | Serializable | JS support | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| Fireworks | langchain_fireworks | ❌ | ❌ | ✅ | | |
Setup
Credentials
Sign in to Fireworks AI for an API key to access our models, and make sure it is set as the FIREWORKS_API_KEY environment variable.

Set up your model using a model id. If the model is not set, the default model is fireworks-llama-v2-7b-chat. See the full, most up-to-date model list on fireworks.ai.
import getpass
import os

if "FIREWORKS_API_KEY" not in os.environ:
    os.environ["FIREWORKS_API_KEY"] = getpass.getpass("Fireworks API Key:")
Installation
You need to install the langchain_fireworks Python package for the rest of the notebook to work.
%pip install -qU langchain-fireworks
Note: you may need to restart the kernel to use updated packages.
Instantiation
from langchain_fireworks import Fireworks

# Initialize a Fireworks model
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    base_url="https://api.fireworks.ai/inference/v1/completions",
)
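As a sketch, sampling behavior can also be tuned at initialization. The `max_tokens` and `temperature` parameters shown below are standard LangChain LLM parameters supported by `Fireworks`; the specific values are illustrative only.

```python
from langchain_fireworks import Fireworks

# A sketch: tune generation at initialization (values are illustrative).
llm = Fireworks(
    model="accounts/fireworks/models/mixtral-8x7b-instruct",
    max_tokens=256,   # cap the length of each completion
    temperature=0.7,  # higher values produce more varied completions
)
```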
Invocation
You can call the model directly with string prompts to get completions.
output = llm.invoke("Who's the best quarterback in the NFL?")
print(output)
If Manningville Station, Lions rookie EJ Manuel's
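Like other LangChain LLMs, `Fireworks` also implements the standard Runnable interface, so you can stream tokens as they are generated instead of waiting for the full completion. A minimal sketch, assuming `llm` from the instantiation step above and a valid FIREWORKS_API_KEY:

```python
# Stream the completion token-by-token (assumes `llm` is initialized
# and FIREWORKS_API_KEY is set in the environment).
for chunk in llm.stream("Who's the best quarterback in the NFL?"):
    print(chunk, end="", flush=True)
```

Streaming is useful for interactive applications where showing partial output reduces perceived latency.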