Our team is currently developing an AI assistant, and I am working on a model to recognize a specific wake-up word. Although I do not have any prior experience with audio classification, I did some research and collected two audio datasets: one containing recordings of the wake-up word and the other containing recordings of background noise, with 250 recordings in each class. I then used MFCC to extract features, created a dataframe, and trained four models: an SVM, a random forest, and two deep learning models with different numbers of layers. Despite achieving high accuracy during training, I am running into issues when testing the model on live recordings. Is my approach correct, what should I do next, and do you have any tips for handling this kind of project?
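For context, here is a minimal sketch of the kind of pipeline I am describing (the folder names, sample rate, and model settings below are placeholders rather than my exact code):

```python
import glob
import numpy as np
import librosa
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load one clip and return a fixed-length vector (MFCCs averaged over time)."""
    y, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # shape (n_mfcc,)

# Placeholder folder layout for the two classes (250 clips each).
wake_files = glob.glob("data/wake_word/*.wav")
noise_files = glob.glob("data/background/*.wav")

X = np.array([extract_mfcc(f) for f in wake_files + noise_files])
y = np.array([1] * len(wake_files) + [0] * len(noise_files))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Classical models; the two deep learning models are trained on the same features.
for model in (SVC(), RandomForestClassifier()):
    model.fit(X_train, y_train)
    print(type(model).__name__)
    print(classification_report(y_test, model.predict(X_test)))
```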
