Currently, the Babble App stores one calibration. This could be enhanced if calibrations could be stored per user AND avatar, with a default/fallback config per user.
These could then be loaded manually OR automatically when a user's avatar changes (say in VRChat via OSC). This could be represented by a tree dropdown selector, with automatic switching enabled/disabled like so:
[ ] Automatically apply per-avatar configuration
- User 1
  - Avatar A
  - Avatar B
  - ...
  - Default
- User 2
  - Avatar A
  - Avatar B
  - ...
  - Default
- ...
With options to edit/delete entries as required.
Perhaps we could keep a single "global" calibration like what we have now, and add profile-selectable "modifiers" that are applied on top of it. For example, a second normalization pass could "boost" the output of a specific shape. Another example would be applying a curve for nonlinear shape activations.
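The modifier idea could look something like the sketch below: a per-shape boost followed by a power curve for nonlinear activation. The function and parameter names (`apply_modifiers`, `boosts`, `gamma`) are illustrative assumptions, not an existing API:

```python
def apply_modifiers(shapes: dict, boosts: dict, gamma: float = 1.0) -> dict:
    """Apply profile modifiers on top of globally calibrated shape values.

    shapes: shape name -> calibrated value in [0, 1]
    boosts: per-shape multiplier (second normalization to "boost" a shape)
    gamma:  exponent for a nonlinear activation curve (1.0 = linear)
    """
    out = {}
    for name, value in shapes.items():
        value = min(value * boosts.get(name, 1.0), 1.0)  # boost, clamped to 1
        out[name] = value ** gamma  # nonlinear response curve
    return out


raw = {"mouthPucker": 0.25, "jawOpen": 0.5}
apply_modifiers(raw, boosts={"mouthPucker": 2.0}, gamma=2.0)
```

Since modifiers are pure post-processing, they stack cleanly: the global calibration stays untouched, and each profile just selects which modifier set to run.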