This page shows how to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
model = AutoModelForSpeechSeq2Seq.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
```
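Either snippet can be exercised end to end by transcribing an audio file. Below is a minimal sketch using the pipeline, where `sample.wav` is a hypothetical local file (not part of this repository); since this checkpoint is randomly initialized for internal testing, the transcript will be meaningless, and the call only verifies the plumbing.

```python
# Minimal sketch: run the ASR pipeline on a local audio file.
# "sample.wav" is a hypothetical placeholder; supply your own audio.
# Decoding a file path requires ffmpeg; a NumPy array of raw samples also works.
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)
result = pipe("sample.wav")
print(result["text"])
```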
- Notebooks
  - Google Colab
  - Kaggle
The model's generation config (generation_config.json):

```json
{
  "_from_model_config": true,
  "bos_token_id": 100257,
  "eos_token_id": 100257,
  "output_attentions": false,
  "output_hidden_states": false,
  "pad_token_id": 100256,
  "transformers_version": "5.3.0.dev0",
  "use_cache": true
}
```
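If you prefer to read these values in code rather than from the raw file, Transformers exposes them through `GenerationConfig`. A minimal sketch, assuming the transformers library is installed and the Hub is reachable:

```python
from transformers import GenerationConfig

# Fetch the generation config shown above directly from the Hub.
gen_config = GenerationConfig.from_pretrained(
    "onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration"
)
print(gen_config.bos_token_id)  # expected: 100257
print(gen_config.pad_token_id)  # expected: 100256
```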