A: The behavior of each model is highly dependent on the nature of the data on which it was trained. Models trained on percussive sound perform well with percussive input, while models trained on sustained sound tend to perform well with sustained input.
If you find the output of a model unsatisfactory, please try input sounds that have similar characteristics to the model's training data.
A: With Neutone Morpho we created an instrument specialised in tone morphing. The "old" Neutone FX plugin can handle many different types of machine learning architectures.
We plan to keep Neutone FX free and will continue to maintain it so that audio researchers have an easy way to bring their AI models into the DAW.
A: Individual models can be deleted from the model browser by clicking on a model’s “more info” button. The settings menu lets you delete all downloaded models in one go, as well as change the location of the user folder where your models are stored. By default, the user folder is located at:
On Mac: ~/Library/Application Support/Neutone/NeutoneMorpho
On Windows: C:\Users\<your username>\AppData\Roaming\Neutone\NeutoneMorpho\
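If you want to inspect the user folder programmatically, here is a minimal sketch in Python, assuming the default locations listed above and that you have not moved the folder via the settings menu. The exact file layout of downloaded models inside the folder is not documented here; the script simply lists whatever it finds.

```python
import platform
from pathlib import Path

def morpho_user_folder() -> Path:
    """Return the default Neutone Morpho user folder for the current OS."""
    system = platform.system()
    if system == "Darwin":  # macOS
        return Path.home() / "Library" / "Application Support" / "Neutone" / "NeutoneMorpho"
    if system == "Windows":
        return Path.home() / "AppData" / "Roaming" / "Neutone" / "NeutoneMorpho"
    raise RuntimeError(f"Unsupported platform: {system}")

folder = morpho_user_folder()
print(f"Default user folder: {folder}")
for item in sorted(folder.rglob("*")):
    # Print everything currently stored in the folder (models and any metadata).
    print(item.relative_to(folder))
```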
A: Of course. Sounds you make with all models, including Artist models, can be used commercially in your projects.
A: Morpho models are trained exclusively on sounds that we have the rights to use. Our sources range from public domain recordings to sounds we captured ourselves. For full transparency, we display exactly what data was used for each model in the model browser.
A: Yes! Please see the Model Training section below for more information.
A: Yes, we offer 50% off for students. Please send a copy of your student/faculty ID to support@neutone.ai to apply for the discount. It might take a couple of days for us to get back to you.
We also have a Discord chat where you are more than welcome to ask questions or share your sounds.
Model Training
Our model training service allows anyone to create their own unique Neutone Morpho models and play them within the plugin.
We believe in personal and customizable tools for self-expression, and training your own Morpho model is a great way to bottle your sonic DNA and hear it from a fresh perspective.
To start, visit your Training Dashboard, purchase a token, and begin a new training session. You just need to upload your audio data and our engineers will handle the training process.
Data Preparation
To train a model you need a good dataset.
Data amount
More than 1 hour of unique, non-repeating audio is ideal. This can be lots of snippets or a single long recording.
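As a quick check, the sketch below (Python, using the third-party soundfile library; the folder name my_dataset is just a placeholder) sums the duration of every audio file in a dataset folder so you can confirm you have more than an hour of material. Note that reading .mp3 with soundfile requires a recent libsndfile build.

```python
from pathlib import Path
import soundfile as sf  # third-party: pip install soundfile

AUDIO_EXTS = {".wav", ".aiff", ".flac", ".mp3"}  # formats accepted by the training service

def total_duration_hours(dataset_dir: str) -> float:
    """Sum the duration of all audio files under dataset_dir, in hours."""
    seconds = 0.0
    for path in Path(dataset_dir).rglob("*"):
        if path.suffix.lower() in AUDIO_EXTS:
            seconds += sf.info(str(path)).duration  # duration in seconds
    return seconds / 3600.0

print(f"Dataset length: {total_duration_hours('my_dataset'):.2f} h")
```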
Diversity
Too little diversity results in models that output the same thing over and over. Too much diversity and the Morpho model tends to fail. A collection of recordings of a single instrument generally provides just the right amount of diversity.
Sound quality
Any noise present in the original data will be learned and replicated by the model. Make sure your audio is as free from noise as possible.
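One rough way to check a recording is to estimate its noise floor, for example as the RMS level of its quietest frames. The sketch below (Python, with the third-party numpy and soundfile libraries; take_01.wav is a placeholder filename) prints that estimate in dBFS, where a lower, more negative figure means a quieter background.

```python
import numpy as np
import soundfile as sf  # third-party: pip install soundfile

def noise_floor_dbfs(path: str, frame_len: int = 2048) -> float:
    """Estimate the noise floor as the RMS level (in dBFS) of the quietest 10% of frames."""
    audio, _ = sf.read(path, always_2d=True)
    mono = audio.mean(axis=1)  # mix down to mono
    n_frames = len(mono) // frame_len
    frames = mono[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1) + 1e-12)
    quietest = np.sort(rms)[: max(1, n_frames // 10)]
    return float(20 * np.log10(quietest.mean() + 1e-12))

print(f"Estimated noise floor: {noise_floor_dbfs('take_01.wav'):.1f} dBFS")
```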
Best Practices
Consider all the possible inputs you might use with your model and all the ways these can differ from each other. The volume, frequency content, and attack and sustain characteristics a model responds well to are all shaped by its training data.
Input sounds that differ too greatly in these respects can result in the model breaking down and producing noise. For example, if you train a model exclusively on dark sounds, you might get noisy results when inputting bright sounds. With this in mind, we recommend augmenting your data with additional variations that help the model generalize. This could mean varying the pitch or speed, applying EQ or filtering, or varying the volume of the audio.
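The sketch below illustrates this kind of augmentation in Python using the third-party librosa and soundfile libraries. The filenames and the specific shift, stretch, and gain amounts are only examples, not recommended settings; it writes pitch-shifted, slowed-down, and quieter variants of a recording that you can add to your dataset alongside the original.

```python
import librosa          # third-party: pip install librosa soundfile
import soundfile as sf

def write_variants(path: str, out_prefix: str) -> None:
    """Write pitch-shifted, time-stretched and gain-scaled variants of one recording."""
    y, sr = librosa.load(path, sr=None, mono=True)
    variants = {
        "pitch_up":   librosa.effects.pitch_shift(y, sr=sr, n_steps=2),   # up 2 semitones
        "pitch_down": librosa.effects.pitch_shift(y, sr=sr, n_steps=-2),  # down 2 semitones
        "slower":     librosa.effects.time_stretch(y, rate=0.9),          # 10% slower
        "quieter":    0.5 * y,                                            # roughly -6 dB
    }
    for name, audio in variants.items():
        sf.write(f"{out_prefix}_{name}.wav", audio, sr)

write_variants("guitar_take.wav", "guitar_take")
```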
More Training Questions
A: No, you must have a full license for Neutone Morpho in order to train your own models.
A: No, trained models will only be available to the user who trained the model. However, please get in touch if you want to discuss the addition of your model to the Morpho Store page.
A: The user who trained the model using Cocoon retains exclusive ownership.
A: Not unless you have permission from the copyright holder.
A: We currently support .wav, .aiff, .flac and .mp3 formats.
A: Each training token allows for one retraining attempt if users are not satisfied with their model.