Deploying my first model to Hugging Face Spaces
Steps to get the Hugging Face Gradio application to work
- Know before getting started
- Labeller function
- Helpful external facing markdown text
- Inference
- Hugging Face
This notebook code is the same code used to create the app.py file you can find in my UrbanSounds8k Spaces repo.
Creating a Space at Hugging Face is simple and intuitive. You will need to both "Create a new space" and "Create a new model repository"; the interface to create a new model repository (repo) is in your profile settings.
If you upload your model artifacts into your Spaces repository instead of a model repo, you will run into 404 or 403 series errors. Once you create a model repo, Hugging Face's setup instructions include a Git LFS install step, tailored to the location where you clone this empty repo.
In that repo, before copying in the *.pkl model file from the earlier step, ensure you track pkl files as a file type managed by Git LFS:
git lfs track "*.pkl"
If you miss this step, you will run into 403 errors when you execute this line:
model_file = hf_hub_download("gputrain/UrbanSound8K-model", "model.pkl")
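For reference, the whole sequence inside the model repo looks roughly like this (a sketch assuming a standard git plus Git LFS setup and the artifact names used in the code below; adjust paths and the commit message to your own repo):

git lfs install
git lfs track "*.pkl"
git add .gitattributes
cp /path/to/model.pkl /path/to/UrbanSound8K.csv .
git add model.pkl UrbanSound8K.csv
git commit -m "Add model and label reference"
git push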
Back in the Spaces repo, I have a specific requirements.txt so that the librosa modules my inference function depends on can be loaded. In my example, I also needed my labeller function for the model to work. This requirements.txt lives in my Spaces repo.
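I have not reproduced my exact file here, but as a sketch the requirements.txt has to cover the imports used below, so something along these lines (unpinned for brevity):

fastai
librosa
pandas
matplotlib
huggingface_hub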
import gradio
from fastai.vision.all import *
from fastai.data.all import *
from pathlib import Path
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import specgram
import librosa
import librosa.display
from huggingface_hub import hf_hub_download
from fastai.learner import load_learner
# Download the label reference CSV and the exported fastai learner from the model repo
ref_file = hf_hub_download("gputrain/UrbanSound8K-model", "UrbanSound8K.csv")
model_file = hf_hub_download("gputrain/UrbanSound8K-model", "model.pkl")

df = pd.read_csv(ref_file)
# Map each spectrogram image name (slice file name with .wav swapped for .png) to its class
df['fname'] = df[['slice_file_name', 'fold']].apply(lambda x: str(x['slice_file_name'][:-4]) + '.png', axis=1)
my_dict = dict(zip(df.fname, df['class']))
def label_func(f_name):
    # Labeller used at training time: look the bare file name up in the class dictionary
    f_name = str(f_name).split('/')[-1]
    return my_dict[f_name]

model = load_learner(model_file)
labels = model.dls.vocab  # the 10 UrbanSound8K class names
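As a quick sanity check, the labeller can be called directly. The file name below is hypothetical but follows the UrbanSound8K naming scheme, and the returned class comes from the CSV, so treat the comment as illustrative:

label_func('100032-3-0-0.png')  # returns the class recorded for that slice in UrbanSound8K.csv, e.g. 'dog_bark'
labels                          # the 10 classes the learner was trained on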
with open("article.md") as f:
    article = f.read()
interface_options = {
    "title": "Urban Sound 8K Classification",
    "description": "Fast AI example of using a pre-trained Resnet34 vision model for an audio classification task on the [Urban Sounds](https://urbansounddataset.weebly.com/urbansound8k.html) dataset. ",
    "article": article,
    "interpretation": "default",
    "layout": "horizontal",
    # Audio from validation file
    "examples": ["dog_bark.wav", "children_playing.wav", "air_conditioner.wav", "street_music.wav", "engine_idling.wav",
                 "jackhammer.wav", "drilling.wav", "siren.wav", "car_horn.wav", "gun_shot.wav"],
    "allow_flagging": "never"
}
def convert_sounds_melspectogram(audio_file):
    # Load the audio once with librosa
    samples, sample_rate = librosa.load(audio_file)
    # Small, axis-free figure so the saved image matches the training spectrograms
    fig = plt.figure(figsize=[0.72, 0.72])
    ax = fig.add_subplot(111)
    ax.axes.get_xaxis().set_visible(False)
    ax.axes.get_yaxis().set_visible(False)
    ax.set_frame_on(False)
    melS = librosa.feature.melspectrogram(y=samples, sr=sample_rate)
    librosa.display.specshow(librosa.power_to_db(melS, ref=np.max))
    filename = 'temp.png'
    plt.savefig(filename, dpi=400, bbox_inches='tight', pad_inches=0)
    plt.close('all')
    return None
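Calling it on one of the bundled example clips (assuming the .wav files listed in interface_options sit next to app.py) writes the spectrogram image to the working directory:

convert_sounds_melspectogram('dog_bark.wav')  # writes temp.png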
def predict():
    # Classify the spectrogram image written by convert_sounds_melspectogram
    img = PILImage.create('temp.png')
    pred, pred_idx, probs = model.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}
def end2endpipeline(filename):
    # Convert the uploaded audio to a mel spectrogram, then classify the image
    convert_sounds_melspectogram(filename)
    return predict()
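Before wiring this into Gradio, you can smoke-test the pipeline directly; the result is a dict mapping each of the 10 class names to its predicted probability (the values shown in the comment are placeholders, not actual results):

preds = end2endpipeline('dog_bark.wav')
print(preds)  # {'air_conditioner': ..., 'car_horn': ..., ..., 'street_music': ...}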
demo = gradio.Interface(
    fn=end2endpipeline,
    inputs=gradio.inputs.Audio(source="upload", type="filepath"),
    outputs=gradio.outputs.Label(num_top_classes=10),
    **interface_options,
)
launch_options = {
    "enable_queue": True,
    "share": False,
    # "cache_examples": True,
}
demo.launch(**launch_options)
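Saving all of the above as app.py next to article.md, requirements.txt, and the example .wav files lets you test the app locally before pushing to the Space (assuming your local environment has the same packages installed):

python app.py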
Hugging Face
The final endpoint, once you assemble all of this and work through the logs to resolve any errors, is here.