xlelords/orbis-coder
How to use xlelords/orbis-coder with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="xlelords/orbis-coder",
    filename="orbis-coding.gguf",
)

response = llm.create_chat_completion(
    # No input example is defined for this model task, so this
    # prompt is just an illustrative placeholder.
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ]
)
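create_chat_completion returns an OpenAI-style response dict, so the assistant's reply can be read off the choices array; a minimal sketch:

# The reply sits in the OpenAI-style choices array.
print(response["choices"][0]["message"]["content"])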
How to use xlelords/orbis-coder with llama.cpp:
# macOS (Homebrew):
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf xlelords/orbis-coder

# Run inference directly in the terminal:
llama-cli -hf xlelords/orbis-coder

# Windows (winget):
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf xlelords/orbis-coder

# Run inference directly in the terminal:
llama-cli -hf xlelords/orbis-coder

# Or download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf xlelords/orbis-coder

# Run inference directly in the terminal:
./llama-cli -hf xlelords/orbis-coder

# Or build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf xlelords/orbis-coder

# Run inference directly in the terminal:
./build/bin/llama-cli -hf xlelords/orbis-coder
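Whichever way llama.cpp was installed, llama-server exposes an OpenAI-compatible API (on port 8080 by default), so any OpenAI client can talk to it. A minimal sketch using the openai Python package; the API key is a dummy value since the local server does not check it, and the prompt is just an illustrative example:

# pip install openai
from openai import OpenAI

# llama-server listens on http://localhost:8080 by default.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="none")

response = client.chat.completions.create(
    model="xlelords/orbis-coder",  # local servers typically ignore or echo this name
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)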
How to use xlelords/orbis-coder with Ollama:
ollama run hf.co/xlelords/orbis-coder
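Ollama also serves an OpenAI-compatible endpoint on its default port 11434, so the same client pattern works once the model has been pulled with the command above; a sketch (the prompt is illustrative):

# pip install openai
from openai import OpenAI

# Ollama's OpenAI-compatible endpoint; the API key is required but unused.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="hf.co/xlelords/orbis-coder",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)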
How to use xlelords/orbis-coder with Unsloth Studio:
# macOS / Linux:
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for xlelords/orbis-coder to start chatting.

# Windows (PowerShell):
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# and search for xlelords/orbis-coder to start chatting.

# Or, with no setup required:
# open https://huggingface.co/spaces/unsloth/studio in your browser
# and search for xlelords/orbis-coder to start chatting.
How to use xlelords/orbis-coder with Pi:
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf xlelords/orbis-coder
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "xlelords/orbis-coder"
        }
      ]
    }
  }
}

# Start Pi in your project directory:
pi
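Before launching Pi, it can help to confirm that the baseUrl from models.json is actually reachable; a small sketch using requests, relying on the /v1/models route that llama-server's OpenAI-compatible API exposes:

# pip install requests
import requests

# Same base URL as the "llama-cpp" provider entry in ~/.pi/agent/models.json.
resp = requests.get("http://localhost:8080/v1/models", timeout=5)
resp.raise_for_status()
print(resp.json())  # lists the models the local server is serving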
How to use xlelords/orbis-coder with Hermes Agent:
# Install llama.cpp:
brew install llama.cpp

# Start a local OpenAI-compatible server:
llama-server -hf xlelords/orbis-coder
# Install Hermes:
curl -fsSL https://hermes-agent.nousresearch.com/install.sh | bash
hermes setup

# Point Hermes at the local server:
hermes config set model.provider custom
hermes config set model.base_url http://127.0.0.1:8080/v1
hermes config set model.default xlelords/orbis-coder

# Start Hermes:
hermes
How to use xlelords/orbis-coder with Docker Model Runner:
docker model run hf.co/xlelords/orbis-coder
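Docker Model Runner also fronts models with an OpenAI-compatible API. A sketch under the assumption that the Model Runner's host TCP endpoint is enabled on its default port 12434; verify the base URL against your Docker setup, and the prompt is illustrative:

# pip install openai
from openai import OpenAI

# Assumes Model Runner's host TCP endpoint is enabled on its default port 12434;
# verify the base URL against your Docker Model Runner configuration.
client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="none")

response = client.chat.completions.create(
    model="hf.co/xlelords/orbis-coder",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)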
How to use xlelords/orbis-coder with Lemonade:
# Download Lemonade from https://lemonade-server.ai/

lemonade pull xlelords/orbis-coder

# Run the model; replace {{QUANT_TAG}} with a quantization tag
# (the available variants could not be determined for this repo):
lemonade run user.orbis-coder-{{QUANT_TAG}}

# List installed models:
lemonade list
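Lemonade Server likewise speaks the OpenAI API once a model is running. A sketch, assuming its default base URL of http://localhost:8000/api/v1; the model name, including the quantization tag, should match what `lemonade list` reports, and the prompt is illustrative:

# pip install openai
from openai import OpenAI

# Assumes Lemonade Server's default base URL; adjust if yours differs.
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="none")

response = client.chat.completions.create(
    model="user.orbis-coder-{{QUANT_TAG}}",  # use the exact name from `lemonade list`
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
)
print(response.choices[0].message.content)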
Not recommended yet: this model is still under construction, so expect bugs and poor performance.