Instructions for using onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "automatic-speech-recognition",
    model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
model = AutoModelForSpeechSeq2Seq.from_pretrained("onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration")
```

- Notebooks
  - Google Colab
  - Kaggle
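The automatic-speech-recognition pipeline can be fed raw audio held in memory rather than a file path. A minimal sketch, assuming a 1-second silent clip as placeholder input (the model name is from this page; building the pipeline downloads the tiny, randomly initialised checkpoint from the Hub, so `transcribe` needs network access):

```python
# Sketch: pass an in-memory waveform to the ASR pipeline.
import numpy as np

SAMPLING_RATE = 16_000  # matches the processor's sampling_rate


def transcribe(waveform: np.ndarray) -> str:
    """Run the tiny-random GraniteSpeech checkpoint on a raw waveform."""
    from transformers import pipeline

    pipe = pipeline(
        "automatic-speech-recognition",
        model="onnx-internal-testing/tiny-random-GraniteSpeechForConditionalGeneration",
    )
    # The pipeline accepts a raw array together with its sampling rate.
    return pipe({"raw": waveform, "sampling_rate": SAMPLING_RATE})["text"]


audio = np.zeros(SAMPLING_RATE, dtype=np.float32)  # 1 s of silence (placeholder)
# text = transcribe(audio)  # requires network access to fetch the model
```

Since the checkpoint is randomly initialised, the returned text is meaningless; it is intended for wiring and shape testing, not for real transcription.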
The processor configuration:

```json
{
  "audio_processor": {
    "feature_extractor_type": "GraniteSpeechFeatureExtractor",
    "melspec_kwargs": {
      "hop_length": 160,
      "n_fft": 512,
      "n_mels": 8,
      "sample_rate": 16000,
      "win_length": 400
    },
    "projector_downsample_rate": 1,
    "projector_window_size": 3,
    "sampling_rate": 16000
  },
  "audio_token": "<|audio|>",
  "processor_class": "GraniteSpeechProcessor"
}
```
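The `melspec_kwargs` above determine the shape of the features the extractor produces. A minimal sketch of the frame arithmetic, assuming a centered STFT (the common default in torchaudio/librosa, where the signal is padded so one frame is emitted per hop, plus one):

```python
# Frame-count arithmetic for the feature extractor's mel spectrogram,
# using the parameter values from the config above.
HOP_LENGTH = 160      # samples between successive frames
N_MELS = 8            # mel filterbank size
SAMPLE_RATE = 16_000  # Hz


def mel_frames(num_samples: int, hop_length: int = HOP_LENGTH) -> int:
    """Frame count for a centered STFT: one frame per hop, plus one."""
    return 1 + num_samples // hop_length


one_second = SAMPLE_RATE  # 16 000 samples
n_frames = mel_frames(one_second)
print(n_frames)            # 101 frames for a 1-second clip
print((n_frames, N_MELS))  # resulting spectrogram shape (time, mels)
```

With `hop_length=160` at 16 kHz, each frame covers 10 ms, so a 1-second clip yields 101 frames of 8 mel bins each; a non-centered STFT would yield slightly fewer.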