runtime error
Exit code: 1. Reason:
⏳ Loading Legal AI model ...
config.json: 100%|██████████| 1.56k/1.56k [00:00<00:00, 4.83MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
Traceback (most recent call last):
  File "/app/app.py", line 18, in <module>
    base_model = AutoModelForCausalLM.from_pretrained(
        base_model_name, torch_dtype=torch.float16, device_map="auto"
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 374, in from_pretrained
    return model_class.from_pretrained(
        pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 3964, in from_pretrained
    hf_quantizer, config, device_map = get_hf_quantizer(
        config, quantization_config, device_map, weights_only, user_agent
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/quantizers/auto.py", line 326, in get_hf_quantizer
    hf_quantizer.validate_environment(
        device_map=device_map,
        weights_only=weights_only,
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 60, in validate_environment
    raise ImportError(
        f"Using `bitsandbytes` 4-bit quantization requires bitsandbytes: `pip install -U bitsandbytes>={BITSANDBYTES_MIN_VERSION}`"
    )
ImportError: Using `bitsandbytes` 4-bit quantization requires bitsandbytes: `pip install -U bitsandbytes>=0.46.1`
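The final ImportError is the actual failure: the model checkpoint's config requests bitsandbytes 4-bit quantization, but the `bitsandbytes` package is not installed in the container. A minimal fix, assuming the app is built from a `requirements.txt` (filename assumed, not shown in the log), is to add the dependency at the version the error message itself names:

```
# requirements.txt — add the 4-bit quantization backend the checkpoint requires
bitsandbytes>=0.46.1
```

Separately, the `torch_dtype` deprecation warning is harmless here but worth fixing: in the `from_pretrained` call at app.py line 18, rename `torch_dtype=torch.float16` to `dtype=torch.float16`, per the warning in the log.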