Shubham Saha

thebongcook
AI & ML interests

None yet

Recent Activity

updated a Space about 8 hours ago
thebongcook/sandbox-5aef9f95
upvoted a collection about 2 months ago
Qwen3.5
reacted to ImranzamanML's post with 👍 about 1 year ago
Here is how we can calculate the size of any LLM model. Each parameter in an LLM is typically stored as a floating-point number, and the size of each parameter in bytes depends on the precision:
  • 32-bit precision: each parameter takes 4 bytes.
  • 16-bit precision: each parameter takes 2 bytes.

To calculate the total memory usage of the model:

Memory usage (in bytes) = No. of Parameters × Size of Each Parameter

For example, with 1 billion parameters:

32-bit precision (FP32): each parameter takes 4 bytes.
Memory usage = 1,000,000,000 × 4 = 4,000,000,000 bytes, or ≈ 3.73 GB.

16-bit precision (FP16): each parameter takes 2 bytes.
Memory usage = 1,000,000,000 × 2 = 2,000,000,000 bytes, or ≈ 1.86 GB.

So, depending on whether you use 32-bit or 16-bit precision, a model with 1 billion parameters would use approximately 3.73 GB or 1.86 GB of memory, respectively.
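The calculation above can be sketched in a few lines of Python. The function names and the gibibyte conversion (1 GiB = 1024³ bytes, which matches the post's 3.73/1.86 figures) are illustrative assumptions, not from the post itself:

```python
# Sketch: estimate an LLM's memory footprint from its parameter count
# and numeric precision. Function names here are illustrative.

def model_size_bytes(num_params: int, bytes_per_param: int) -> int:
    """Memory usage in bytes = number of parameters x bytes per parameter."""
    return num_params * bytes_per_param

def to_gib(num_bytes: int) -> float:
    """Convert bytes to gibibytes (1 GiB = 1024**3 bytes)."""
    return num_bytes / 1024**3

params = 1_000_000_000  # 1 billion parameters

fp32 = model_size_bytes(params, 4)  # 32-bit float: 4 bytes per parameter
fp16 = model_size_bytes(params, 2)  # 16-bit float: 2 bytes per parameter

print(f"FP32: {to_gib(fp32):.2f} GiB")  # ~3.73
print(f"FP16: {to_gib(fp16):.2f} GiB")  # ~1.86
```

Note this counts only the weights; inference or training adds further memory for activations, optimizer state, and the KV cache.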
Organizations

Gaudiy, Inc. · ML intern explorers · Gaudiy AI Research

spaces 1

Running

ml-intern sandbox

about 8 hours ago

models 0

None public yet

datasets 0

None public yet