# LlamaIndex LLMs Integration: Anyscale
## Installation

```shell
%pip install llama-index-llms-anyscale
!pip install llama-index
```
## Basic Usage

```python
from llama_index.llms.anyscale import Anyscale
from llama_index.core.llms import ChatMessage

llm = Anyscale(api_key="<your-api-key>")
message = ChatMessage(role="user", content="Tell me a joke")
resp = llm.chat([message])
print(resp)
```
## Streaming Example

```python
message = ChatMessage(role="user", content="Tell me a story in 250 words")
resp = llm.stream_chat([message])
for r in resp:
    print(r.delta, end="")
```
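Each yielded `r.delta` carries only the newly generated text, so the full response is the concatenation of the deltas. A minimal sketch of that accumulation, using a hard-coded list of strings in place of a live `stream_chat` response (no Anyscale call is made here):

```python
# Simulated stream: each element stands in for the .delta attribute of one
# chunk yielded by stream_chat. The text is made up for illustration.
chunks = ["Once ", "upon ", "a ", "time..."]

full_text = ""
for delta in chunks:
    full_text += delta  # same accumulation that print(r.delta, end="") shows on screen

print(full_text)  # -> Once upon a time...
```

This is why the streaming loop prints with `end=""`: the deltas already contain any spacing, so joining them with nothing in between reconstructs the complete message.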
## Completion Example

```python
resp = llm.complete("Tell me a joke")
print(resp)
```
## Streaming Completion Example

```python
resp = llm.stream_complete("Tell me a story in 250 words")
for r in resp:
    print(r.delta, end="")
```
## Model Configuration

```python
llm = Anyscale(
    model="codellama/CodeLlama-34b-Instruct-hf", api_key="<your-api-key>"
)
resp = llm.complete("Show me the C++ code to send requests to an HTTP server")
print(resp)
```
## LLM Implementation Example

https://docs.llamaindex.ai/en/stable/examples/llm/anyscale/