Manage your vocal synth model

Versions

The AI learns from your data incrementally, processing the samples over a series of training steps: the deeper the training, the more steps it takes. A small or lower-quality dataset, such as one recorded for speech rather than singing, may need only a few steps, while a larger and more varied dataset may need more steps to fit thoroughly. Too many training steps, however, can lead to overfitting, which may degrade your voice model in unpredictable ways.

When training finishes, you will receive several versions of the model, each corresponding to a different number of training steps and labeled from Rare to Well-done. To find the version that sounds best, switch the deployment between versions and compare them.
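
The version names map to checkpoints saved at different training steps, and overfitting is the reason the longest-trained version is not automatically the best one. The sketch below is purely illustrative and is not ACE Studio's internal code; the step counts and scores are hypothetical, and in practice you compare versions by listening rather than by a numeric score. It simply shows why the right choice is the checkpoint that performs best on material it was not trained on:

```python
# Illustrative only: hypothetical checkpoints saved at increasing training steps.
# The step counts and scores are made up; ACE Studio handles all of this internally.

checkpoints = {
    "Rare":      {"steps": 2_000,  "validation_loss": 0.42},
    "Medium":    {"steps": 8_000,  "validation_loss": 0.31},
    "Well-done": {"steps": 20_000, "validation_loss": 0.36},  # more steps, but overfit
}

def pick_best(checkpoints: dict) -> str:
    """Return the checkpoint that does best on held-out data,
    which is not necessarily the one trained the longest."""
    return min(checkpoints, key=lambda name: checkpoints[name]["validation_loss"])

if __name__ == "__main__":
    best = pick_best(checkpoints)
    print(f"Best version: {best} ({checkpoints[best]['steps']} steps)")
```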

Blend voices

Blending voices results in a hybrid voice. You can customize your voice model to sound more like your target voice by adjusting the ratios of the blended voices. To do this, navigate to the slots management page and click the ‘blend voices’ button located under each version.

After blending, your model adopts the new voice characteristics. To apply the change, restart ACE Studio so that the model is refreshed.
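
Conceptually, blending with a ratio can be thought of as a weighted interpolation between two voices. The sketch below only illustrates that idea and is not ACE Studio's implementation; the parameter names, their values, and the 70/30 ratio are hypothetical:

```python
# Illustrative only: blending as a weighted average of two voices' parameters.
# The parameter names, values, and the 70/30 ratio are hypothetical examples.

def blend(voice_a: dict, voice_b: dict, ratio_a: float) -> dict:
    """Interpolate each shared parameter: ratio_a of voice A, (1 - ratio_a) of voice B."""
    return {
        name: ratio_a * voice_a[name] + (1.0 - ratio_a) * voice_b[name]
        for name in voice_a
    }

voice_a = {"timbre": 0.8, "brightness": 0.3}  # hypothetical target voice
voice_b = {"timbre": 0.2, "brightness": 0.9}  # hypothetical second voice

hybrid = blend(voice_a, voice_b, ratio_a=0.7)  # 70% voice A, 30% voice B
print(hybrid)  # approximately {'timbre': 0.62, 'brightness': 0.48}
```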

Deploy to ACE Studio

For Basic custom slots and Pro custom slots, once a version has been deployed you can switch the deployment from one version to another at any time. You will need to re-launch ACE Studio after each deployment to refresh your voice library.