AI & ML interests

None defined yet.

Recent Activity

shng2025 published a Space about 18 hours ago: iteratehack/edth-warsaw
wyksdsg updated a Space 8 days ago: iteratehack/deepbattler
ylop updated a Space 8 days ago: iteratehack/team222

ylop published a Space 8 days ago
PiotrPasztor updated a Space 8 days ago
Jabuszko updated a Space 8 days ago
reach-vb 
posted an update 6 months ago
Excited to onboard FeatherlessAI on Hugging Face as an Inference Provider - they bring a fleet of 6,700+ LLMs on-demand on the Hugging Face Hub 🤯

Starting today, you'll be able to access all of those LLMs (OpenAI-compatible) on HF model pages and via OpenAI client libraries too! 💥

Go, play with it today: https://huggingface.co/blog/inference-providers-featherless
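Since the endpoint is OpenAI-compatible, any OpenAI-style client can reach these models through the Hub's router. Here's a minimal stdlib sketch — the router URL follows the inference-providers docs, the model id and token are placeholders, and the `:featherless-ai` suffix is an assumption about the provider-selection syntax:

```python
import json
import urllib.request

API_URL = "https://router.huggingface.co/v1/chat/completions"
HF_TOKEN = "hf_xxx"  # placeholder: your Hugging Face access token

# Standard OpenAI-style chat-completions payload; the ":featherless-ai"
# suffix is assumed to route the request to the Featherless provider.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct:featherless-ai",
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {HF_TOKEN}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(request) would send it; left out here so the
# sketch runs without a real token.
```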

P.S. They're also bringing on more GPUs to support all your concurrent requests!
reach-vb 
posted an update 7 months ago
hey hey @mradermacher - VB from Hugging Face here, we'd love to onboard you over to our optimised xet backend! 💥

as you know we're in the process of upgrading our storage backend to xet (which helps us scale and offer blazingly fast upload/download speeds too): https://huggingface.co/blog/xet-on-the-hub - and now that we're certain the backend can scale even with big models like Llama 4 / Qwen 3, we're moving to the next phase: inviting impactful orgs and users on the hub over. as you're a big part of the open source ML community, we'd love to onboard you next and create some excitement about it in the community too!

in terms of actual steps - it should be as simple as having one of the org admins join hf.co/join/xet - we'll take care of the rest.

p.s. you'd need the latest hf_xet version of the huggingface_hub lib, but everything else should be the same: https://huggingface.co/docs/hub/storage-backends#using-xet-storage
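in practice the upgrade should be a one-liner (a sketch based on the linked storage-backends docs, where `hf_xet` is named as the companion package):

```shell
# With hf_xet installed alongside huggingface_hub, uploads and downloads
# transparently go through the Xet backend - no code changes needed.
pip install -U huggingface_hub hf_xet
```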

p.p.s. this is fully backwards compatible so everything will work as it should! 🤗
ngxson 
posted an update 9 months ago
A comprehensive matrix of which format you should use.

Read more on my blog post: https://huggingface.co/blog/ngxson/common-ai-model-formats

| Hardware        | GGUF      | PyTorch             | Safetensors            | ONNX |
|-----------------|-----------|---------------------|------------------------|------|
| CPU             | ✅ (best) | 🟡                  | 🟡                     |      |
| GPU             |           |                     |                        |      |
| Mobile          |           | 🟡 (via executorch) |                        |      |
| Apple silicon   |           | 🟡                  | ✅ (via MLX framework) |      |
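The formats in the matrix can also be told apart programmatically from their leading bytes: a GGUF file starts with the ASCII magic `GGUF`, and a safetensors file starts with an 8-byte little-endian header length followed by a JSON header. A quick sketch (the helper name is my own):

```python
import json
import struct

def sniff_format(path: str) -> str:
    """Best-effort guess of a model file's format from its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    # GGUF files open with the ASCII magic "GGUF".
    if head[:4] == b"GGUF":
        return "gguf"
    # safetensors files open with a little-endian u64 giving the length of
    # the JSON header that immediately follows it.
    if len(head) == 8:
        (header_len,) = struct.unpack("<Q", head)
        if header_len <= 100_000_000:  # sanity cap; real headers are small
            with open(path, "rb") as f:
                f.seek(8)
                try:
                    json.loads(f.read(header_len))
                    return "safetensors"
                except ValueError:
                    pass
    return "unknown"
```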