ConvNeXt Small
ConvNeXt Small is a pure convolutional backbone designed to bridge the gap between efficient ConvNets and high-performing Vision Transformers. It was introduced by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie in the paper A ConvNet for the 2020s, which modernizes the standard ResNet with Transformer-inspired design choices, including depthwise convolutions, inverted bottlenecks, and fewer activation and normalization layers, to improve scalability. With approximately 50M parameters and 8.7 GFLOPs, ConvNeXt Small competes favorably with Swin-T in both accuracy and throughput on general vision tasks.
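For illustration, here is a minimal sketch of one ConvNeXt block (assuming PyTorch; the layer-scale and stochastic-depth details of the official implementation are omitted for brevity):

import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    # Depthwise 7x7 conv -> LayerNorm -> inverted bottleneck (4x expansion)
    # with a single GELU -> projection back, plus a residual connection.
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)           # one normalization per block
        self.pwconv1 = nn.Linear(dim, 4 * dim)  # expand (inverted bottleneck)
        self.act = nn.GELU()                    # one activation per block
        self.pwconv2 = nn.Linear(4 * dim, dim)  # project back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)  # NCHW -> NHWC so LayerNorm/Linear act on channels
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)  # back to NCHW
        return residual + x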
Model description
The model was converted from a PyTorch Vision checkpoint. The original model has:
- acc@1 (on ImageNet-1K): 83.616%
- acc@5 (on ImageNet-1K): 96.65%
- num_params: 50223688
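The exact conversion pipeline used for these files is not documented here; the following is a plausible sketch, assuming the ai-edge-torch package. The NHWCWrapper is a hypothetical adapter added so the exported model accepts the (1, 224, 224, 3) channels-last input that the inference script below feeds it, since torchvision models expect channels-first input:

import ai_edge_torch
import torch
import torchvision

class NHWCWrapper(torch.nn.Module):
    # Hypothetical adapter: accept NHWC input (as the published .tflite does)
    # and permute to the NCHW layout torchvision models expect.
    def __init__(self, inner: torch.nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.inner(x.permute(0, 3, 1, 2))

model = NHWCWrapper(
    torchvision.models.convnext_small(
        weights=torchvision.models.ConvNeXt_Small_Weights.IMAGENET1K_V1
    )
).eval()
sample = (torch.randn(1, 224, 224, 3),)  # NHWC sample input for tracing
edge_model = ai_edge_torch.convert(model, sample)
edge_model.export("convnext_small.tflite")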
Intended uses & limitations
The model files were converted from pretrained weights provided by PyTorch Vision. The models may carry their own licenses or terms and conditions derived from PyTorch Vision and from the dataset used for training. It is your responsibility to determine whether you have permission to use the models for your use case.
How to Use
1. Install Dependencies
Ensure your Python environment is set up with the required libraries. Run the following command in your terminal:
pip install numpy Pillow huggingface_hub ai-edge-litert
2. Prepare Your Image
The script expects an image file to analyze. Make sure you have an image (e.g., cat.jpg or car.png) saved in the same working directory as your script.
3. Save the Script
Create a new file named classify.py, paste the script below into it, and save the file:
#!/usr/bin/env python3
import argparse, json

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from ai_edge_litert.compiled_model import CompiledModel


def preprocess(img: Image.Image) -> np.ndarray:
    # Resize the shorter side to 230 px, center-crop to 224x224, and
    # normalize with the standard ImageNet mean/std. This matches
    # torchvision's convnext_small inference transforms.
    img = img.convert("RGB")
    w, h = img.size
    s = 230
    if w < h:
        img = img.resize((s, int(round(h * s / w))), Image.BILINEAR)
    else:
        img = img.resize((int(round(w * s / h)), s), Image.BILINEAR)
    left = (img.size[0] - 224) // 2
    top = (img.size[1] - 224) // 2
    img = img.crop((left, top, left + 224, top + 224))
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
        [0.229, 0.224, 0.225], dtype=np.float32
    )
    return np.expand_dims(x, axis=0)  # add batch dimension: (1, 224, 224, 3)


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--image", required=True)
    args = ap.parse_args()

    # Download the LiteRT model and the ImageNet-1K label map from the Hub.
    model_path = hf_hub_download("litert-community/convnext_small", "convnext_small.tflite")
    labels_path = hf_hub_download(
        "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
    )
    with open(labels_path, "r", encoding="utf-8") as f:
        id2label = {int(k): v for k, v in json.load(f).items()}

    img = Image.open(args.image)
    x = preprocess(img)

    # Compile the model, write the preprocessed image into the input buffer,
    # run signature 0, and read the logits back from the output buffer.
    model = CompiledModel.from_file(model_path)
    inp = model.create_input_buffers(0)
    out = model.create_output_buffers(0)
    inp[0].write(x)
    model.run_by_index(0, inp, out)
    req = model.get_output_buffer_requirements(0, 0)
    y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)

    pred = int(np.argmax(y))
    label = id2label.get(pred, f"class_{pred}")
    print(f"Top-1 class index: {pred}")
    print(f"Top-1 label: {label}")


if __name__ == "__main__":
    main()
4. Execute the Python Script
Run the command below from the directory containing your image:
python classify.py --image cat.jpg
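The script prints only the top-1 prediction. To report the five most likely classes as well, a small helper could be added to classify.py; this is a sketch that assumes y holds the raw logits read in main() above:

import numpy as np

def top5(y: np.ndarray, id2label: dict) -> list:
    # Softmax over the raw logits, then return the five best (label, prob) pairs.
    logits = y.reshape(-1).astype(np.float64)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    best = np.argsort(probs)[::-1][:5]
    return [(id2label.get(int(i), f"class_{int(i)}"), float(probs[i])) for i in best]

# Usage inside main(), after y has been read from the output buffer:
# for label, prob in top5(y, id2label):
#     print(f"{label}: {prob:.4f}")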
BibTeX entry and citation info
@misc{liu2022convnet2020s,
  title={A ConvNet for the 2020s},
  author={Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  year={2022},
  eprint={2201.03545},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2201.03545},
}
Evaluation results
- Top-1 Accuracy (Full Precision) on ImageNet-1K validation set: 0.836 (self-reported)
- Top-5 Accuracy (Full Precision) on ImageNet-1K validation set: 0.967 (self-reported)