# Examples

## Chat Demo
This is example code for a chat demo built with Gradio.

Install the necessary packages:

```shell
pip install gradio git+https://github.com/HyperbeeAI/hive-python
```

Create a `demo.py` file:
```python
import os

import gradio as gr
from hyperbee import Hive

client = Hive(
    api_key=os.environ["HIVE_API_KEY"],
)


def predict(message, history):
    history_openai_format = []
    for human, assistant in history:
        history_openai_format.append({"role": "user", "content": human})
        history_openai_format.append({"role": "assistant", "content": assistant})
    history_openai_format.append({"role": "user", "content": message})

    response = client.chat.completions.create(
        model="hive", messages=history_openai_format, temperature=0.4, stream=True
    )

    partial_message = ""
    for chunk in response:
        if chunk.choices[0].delta.content is not None:
            partial_message = partial_message + chunk.choices[0].delta.content
            yield partial_message


gr.ChatInterface(predict).launch(server_name="0.0.0.0", server_port=8080, share=True)
```
Run `demo.py` and open the link printed in the terminal in your browser:

```shell
python demo.py
```
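The `predict` function above flattens Gradio's `(user, assistant)` history pairs into the chat-message list the API expects. In isolation, that conversion looks like the sketch below (`history_to_messages` is a hypothetical helper name, not part of the library):

```python
def history_to_messages(history, message):
    """Convert Gradio (user, assistant) history pairs plus the new user
    message into an OpenAI-style messages list."""
    messages = []
    for human, assistant in history:
        messages.append({"role": "user", "content": human})
        messages.append({"role": "assistant", "content": assistant})
    messages.append({"role": "user", "content": message})
    return messages


msgs = history_to_messages([("hi", "hello!")], "how are you?")
# msgs == [
#     {"role": "user", "content": "hi"},
#     {"role": "assistant", "content": "hello!"},
#     {"role": "user", "content": "how are you?"},
# ]
```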
# Usage
```python
import os

from hyperbee import Hive

client = Hive(
    # This is the default and can be omitted
    api_key=os.environ.get("HIVE_API_KEY"),
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Say this is a test",
        }
    ],
    model="hive",
)
```
While you can provide an `api_key` keyword argument, we recommend
using python-dotenv to add
`HIVE_API_KEY="My API Key"` to your `.env` file so that your API key
is not stored in source control.
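To illustrate the idea, here is a rough sketch of what python-dotenv's `load_dotenv()` does: read `KEY=VALUE` lines from a `.env` file into `os.environ`. The real library additionally handles comments, quoting, interpolation, and by default does not override variables that are already set; this simplified version always assigns.

```python
import os
import tempfile


def load_env_file(path):
    """Simplified stand-in for python-dotenv's load_dotenv()."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip().strip('"')


# Write a throwaway .env file and load it.
with tempfile.TemporaryDirectory() as d:
    env_path = os.path.join(d, ".env")
    with open(env_path, "w") as f:
        f.write('HIVE_API_KEY="My API Key"\n')
    load_env_file(env_path)

print(os.environ["HIVE_API_KEY"])  # My API Key
```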
## Async usage
Simply import `AsyncHive` instead of `Hive` and use `await` with
each API call:
```python
import asyncio
import os

from hyperbee import AsyncHive

client = AsyncHive(
    # This is the default and can be omitted
    api_key=os.environ.get("HIVE_API_KEY"),
)


async def main() -> None:
    chat_completion = await client.chat.completions.create(
        messages=[
            {
                "role": "user",
                "content": "Say this is a test",
            }
        ],
        model="hive",
    )


asyncio.run(main())
```
Functionality between the synchronous and asynchronous clients is otherwise identical.
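Because every call on the async client is awaitable, several completions can be issued concurrently with `asyncio.gather`. A minimal sketch with a stand-in coroutine (`fake_completion` is hypothetical; in real code you would await `client.chat.completions.create(...)` instead):

```python
import asyncio


async def fake_completion(prompt):
    # Stand-in for an awaitable client.chat.completions.create(...) call.
    await asyncio.sleep(0)
    return f"echo: {prompt}"


async def main():
    # Both "requests" run concurrently; gather preserves argument order.
    return await asyncio.gather(
        fake_completion("Say this is a test"),
        fake_completion("Say this is another test"),
    )


results = asyncio.run(main())
print(results)  # ['echo: Say this is a test', 'echo: Say this is another test']
```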