How to use with llama.cpp
Install from brew
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI
# (append a quant tag to pick a specific file, e.g. QuantFactory/Triplex-GGUF:Q4_K_M):
llama-server -hf QuantFactory/Triplex-GGUF
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Triplex-GGUF
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Triplex-GGUF
# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Triplex-GGUF
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Triplex-GGUF
# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Triplex-GGUF
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Triplex-GGUF
# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Triplex-GGUF
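
Whichever install path you use, llama-server exposes an OpenAI-compatible HTTP API (on port 8080 by default). Below is a minimal sketch of querying it from Python with requests; the prompt mirrors the Usage section further down and is illustrative only:

import requests

# llama-server listens on http://localhost:8080 by default.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user",
             "content": "**Entity Types:** CITY, NUMBER\n**Predicates:** POPULATION\n**Text:** San Francisco has a population of 808,437."}
        ],
        "max_tokens": 512,
    },
)
print(resp.json()["choices"][0]["message"]["content"])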
Use Docker
docker model run hf.co/QuantFactory/Triplex-GGUF
QuantFactory/Triplex-GGUF

This is a quantized version of SciPhi/Triplex, created using llama.cpp.

Original Model Card

Triplex: a SOTA LLM for knowledge graph construction.

Knowledge graphs, as used in Microsoft's Graph RAG, enhance RAG methods but are expensive to build. Triplex offers a 98% cost reduction for knowledge graph creation, outperforming GPT-4 at 1/60th the cost and enabling local graph building with SciPhi's R2R.

Triplex, developed by SciPhi.AI, is a fine-tuned version of Phi3-3.8B for creating knowledge graphs from unstructured data. It works by extracting triplets, simple statements consisting of a subject, a predicate, and an object, from text or other data sources.
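
For example, given the sentence "San Francisco has a population of 808,437", a triplet extractor would produce something along these lines (the exact output schema is model-specific; this JSON is purely illustrative):

{
  "triplets": [
    {"subject": "SAN FRANCISCO", "predicate": "POPULATION", "object": "808,437"}
  ]
}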

Benchmark

[Figure: benchmark results]

Usage:

import json
from transformers import AutoModelForCausalLM, AutoTokenizer

def triplextract(model, tokenizer, text, entity_types, predicates):

    # Prompt template: the allowed entity types and predicates are passed
    # as JSON, followed by the source text to extract triplets from.
    input_format = """
        **Entity Types:**
        {entity_types}

        **Predicates:**
        {predicates}

        **Text:**
        {text}
        """

    message = input_format.format(
                entity_types = json.dumps({"entity_types": entity_types}),
                predicates = json.dumps({"predicates": predicates}),
                text = text)

    # Wrap the prompt in the model's chat template and generate on the GPU.
    messages = [{'role': 'user', 'content': message}]
    input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to("cuda")
    output = tokenizer.decode(model.generate(input_ids=input_ids, max_length=2048)[0], skip_special_tokens=True)
    return output

model = AutoModelForCausalLM.from_pretrained("sciphi/triplex", trust_remote_code=True).to('cuda').eval()
tokenizer = AutoTokenizer.from_pretrained("sciphi/triplex", trust_remote_code=True)

entity_types = [ "LOCATION", "POSITION", "DATE", "CITY", "COUNTRY", "NUMBER" ]
predicates = [ "POPULATION", "AREA" ]
text = """
San Francisco,[24] officially the City and County of San Francisco, is a commercial, financial, and cultural center in Northern California. 

With a population of 808,437 residents as of 2022, San Francisco is the fourth most populous city in the U.S. state of California behind Los Angeles, San Diego, and San Jose.
"""

prediction = triplextract(model, tokenizer, text, entity_types, predicates)
print(prediction)
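
The snippet above runs the original full-precision checkpoint via transformers. To run one of the GGUF quantizations from this repo instead, llama-cpp-python can fetch a file straight from the Hub. A minimal sketch, assuming llama-cpp-python and huggingface_hub are installed; the filename pattern is an assumption, so check the repo's file list:

import json
from llama_cpp import Llama

# Download and load a quantized file from this repo.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Triplex-GGUF",
    filename="*Q4_K_M.gguf",  # assumed pattern; adjust to the actual file name
    n_ctx=2048,
)

# Same prompt structure as triplextract above.
prompt = (
    "**Entity Types:**\n" + json.dumps({"entity_types": ["CITY", "NUMBER"]}) +
    "\n\n**Predicates:**\n" + json.dumps({"predicates": ["POPULATION"]}) +
    "\n\n**Text:**\nSan Francisco has a population of 808,437 residents as of 2022."
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": prompt}],
    max_tokens=1024,
)
print(resp["choices"][0]["message"]["content"])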

Model details

Format: GGUF
Model size: 4B params
Architecture: phi3
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
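
To download a single quant file directly (without llama.cpp's -hf helper), huggingface_hub works as well. A short sketch; the exact filename is an assumption, so browse the repo to pick the quant you want:

from huggingface_hub import hf_hub_download

# Filename is an assumption; adjust to the actual file in the repo.
path = hf_hub_download(
    repo_id="QuantFactory/Triplex-GGUF",
    filename="Triplex.Q4_K_M.gguf",
)
print(path)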
