I. Data

Our team uses two sources of listener data: public online comments and an in-house listening group. First, we compile comments listeners leave on sites like YouTube, Last.fm, and SoundCloud that use a frisson word (e.g. chills, goosebumps, eargasm) and include a timestamp. Second, we aggregate these comments to see which songs and moments appear most frequently. Third, our in-house group listens to the most frequently cited songs and records where they experience chills. This process yields a dataset of frisson moments weighted by reliability and robustness across key demographics. This dataset is what we then use to train our algorithms.
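
To make the first two steps concrete, here is a minimal Python sketch of how timestamped frisson comments could be collected and counted. The keyword list, the timestamp pattern, and the comment fields ('song_id', 'text') are illustrative assumptions, not our actual schema or curation rules.

```python
import re
from collections import Counter

# Illustrative frisson vocabulary; the real list is hand-curated.
FRISSON_WORDS = {"chills", "goosebumps", "eargasm", "shivers"}

# Matches timestamps like "1:23" or "1:02:03" inside a comment.
TIMESTAMP_RE = re.compile(r"\b(?:\d{1,2}:)?\d{1,2}:\d{2}\b")

def extract_frisson_mentions(comments):
    """Yield (song_id, timestamp) pairs from comments that use a
    frisson word and carry a timestamp; `comments` is an iterable
    of dicts with hypothetical keys 'song_id' and 'text'."""
    for comment in comments:
        text = comment["text"].lower()
        if any(word in text for word in FRISSON_WORDS):
            for ts in TIMESTAMP_RE.findall(text):
                yield (comment["song_id"], ts)

def rank_moments(comments):
    """Count how often each (song, moment) pair is mentioned, so the
    in-house group can start with the most frequently cited songs."""
    return Counter(extract_frisson_mentions(comments)).most_common()
```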

II. Models

Our algorithms use artificial intelligence techniques from an advanced branch of machine learning called “deep learning,” which mimics the way the human brain identifies and learns patterns. Depending on the type of algorithm (LSTM vs. convolutional, etc.) and the data it is trained on, an AI model will have different strengths and weaknesses (e.g. some are better at certain genres, others at certain frisson patterns). We’re actively working to balance these tradeoffs. Our team has put the two best algorithms we’ve found so far up on the site. These models will become “smarter” and more effective with time, and we may add other models in the future.
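
As a rough illustration of that tradeoff, here is a minimal PyTorch sketch of the two architecture families named above, each predicting a frisson score for every second of a song. The feature sizes and layer choices are assumptions for demonstration; these are not the production models behind the site.

```python
import torch
import torch.nn as nn

class FrissonLSTM(nn.Module):
    """Recurrent variant: reads per-second audio features in order,
    which suits frisson patterns that build up over time."""
    def __init__(self, n_features=64, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, seconds, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, seconds)

class FrissonCNN(nn.Module):
    """Convolutional variant: scores each second from a local window
    of sound, which suits short, texture-driven peaks."""
    def __init__(self, n_features=64, channels=128, kernel=5):
        super().__init__()
        self.conv = nn.Conv1d(n_features, channels, kernel, padding=kernel // 2)
        self.head = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):  # x: (batch, seconds, n_features)
        h = torch.relu(self.conv(x.transpose(1, 2)))
        return torch.sigmoid(self.head(h)).squeeze(1)  # (batch, seconds)
```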

Torch

Main use cases: Finding and creating peak frisson moments across genres

Training data: Only the best, most reliable listener data

Known biases: Focuses on peaks; sometimes struggles to distinguish between low vs. medium moments. Occasionally over-reacts to choruses, especially in Pop and Rock.

Known limitations: Does not understand lyrics. Does not have context outside of the song (e.g. cannot recognize covers).

Lantern

Main use cases: Comparing mixes and commercial analysis of songs in Pop, Rock, and EDM

Training data: The biggest, broadest set of listener data

Known biases: Drawn to emotional, melodic music, especially orchestral sounds and vocal-heavy Pop. Occasionally struggles with Hip-hop (likely due to the importance of lyrics, a limitation noted below).

Known limitations: Does not understand lyrics. Does not have context outside of the song (e.g. cannot recognize covers).

III. Workspace

This feature enables users to upload files and receive analyses from the Qbrio AI. 

Frisson Heatmaps

When you upload a song, the Qbrio AI predicts frisson at each second of your upload and delivers this analysis via a heatmap visual. These heatmaps and core metrics are the common units of analysis across all of the Qbrio site features.
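
For intuition, here is a minimal matplotlib sketch of how per-second scores can be rendered as a one-row heatmap. The random scores stand in for a real prediction; this is not the site's actual rendering code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Stand-in per-second frisson scores in [0, 1] for a 3-minute song.
scores = np.random.rand(180)

# One-row heatmap: time runs left to right, warmer = more frisson.
fig, ax = plt.subplots(figsize=(10, 1.5))
ax.imshow(scores[np.newaxis, :], aspect="auto", cmap="inferno", vmin=0, vmax=1)
ax.set_yticks([])
ax.set_xlabel("Seconds into song")
plt.tight_layout()
plt.show()
```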

Song View

This is the default view for in-depth analysis and editing of one song. After viewing your initial analytics, you can make edits and re-upload the song to see if your ranking increases and where your frisson scores change.
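
A simple way to picture the re-upload comparison: diff the two per-second score arrays and flag the seconds that moved. This helper is a hypothetical sketch, not the site's actual logic.

```python
import numpy as np

def score_deltas(before, after, threshold=0.1):
    """Return (second, change) pairs where the frisson score moved by
    more than `threshold` between two uploads of the same song.
    `before` and `after` are per-second score arrays of equal length."""
    diff = np.asarray(after, dtype=float) - np.asarray(before, dtype=float)
    changed = np.flatnonzero(np.abs(diff) > threshold)
    return [(int(s), float(diff[s])) for s in changed]
```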

Mix View

This feature lets you compare up to four mixes of a song. The view supports synced play and toggling between tracks, just like a DAW. Creative teams use this view to avoid the common problem of “going past” the best mix of a song.
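
One way to think about comparing mixes programmatically is to reduce each mix's score curve to a few comparable numbers. The metric names below are illustrative assumptions, not Qbrio's actual core metrics.

```python
import numpy as np

def summarize_mix(scores):
    """Reduce a per-second frisson curve to summary metrics."""
    scores = np.asarray(scores, dtype=float)
    return {
        "peak": float(scores.max()),
        "mean": float(scores.mean()),
        "seconds_above_0.8": int((scores > 0.8).sum()),
    }

def best_mix(mixes):
    """Pick the highest-peak mix from up to four candidates;
    `mixes` maps a mix name to its per-second score array."""
    return max(mixes, key=lambda name: summarize_mix(mixes[name])["peak"])
```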

A&R View

This feature enables users to compare up to 20 songs at a time. A&R teams use this view to scout new talent and to decide which tracks to cut as singles off of new albums from in-house talent.
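
As a sketch of how such a comparison could order candidates, the helper below ranks tracks by mean predicted frisson. The ranking metric is an assumption for illustration, not Qbrio's actual formula.

```python
def rank_songs(songs, limit=20):
    """Order candidate tracks by mean predicted frisson, highest first.
    `songs` maps a title to its per-second score list."""
    ranked = sorted(
        songs.items(),
        key=lambda item: sum(item[1]) / len(item[1]),
        reverse=True,
    )
    return ranked[:limit]
```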

IV. Library

This feature enables users to search the Qbrio dataset of verified listener frisson moments for creative inspiration.
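
Conceptually, a library query is a filter over moment records. The field names ('genre', 'reliability', etc.) are hypothetical stand-ins for whatever the dataset actually stores.

```python
def search_library(moments, genre=None, min_reliability=0.0):
    """Return verified frisson moments matching a genre (if given)
    and meeting a minimum reliability weight. `moments` is a list of
    dicts with hypothetical keys 'song', 'second', 'genre', 'reliability'."""
    return [
        m for m in moments
        if (genre is None or m["genre"] == genre)
        and m["reliability"] >= min_reliability
    ]
```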