Millions of songs and videos are available to all of us through the Internet. To help users retrieve the content they want, algorithms for the automatic analysis, indexing, and recommendation of this content are essential.
I will discuss some aspects of automated music analysis for music search and recommendation: i) automated music tagging (e.g., identifying ``funky jazz with male vocals'' from the audio signal), and ii) (audio) content-based music recommendation, which provides a list of relevant or similar songs given one or more seed songs (e.g., playlist generation for online radio). Our most recent research on context-aware recommendation takes this one step further by leveraging wearable sensors (e.g., in smartphones) to infer the user's context (activity, mood) and provide recommendations accordingly, without requiring an active user query (``zero click'').
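As a rough illustration of the content-based approach (not the specific system discussed in the talk), one could summarize each track with timbral features such as mean and standard deviation of MFCC frames and rank candidate songs by their similarity to a seed song. The following minimal sketch assumes Python with the librosa and scikit-learn libraries and hypothetical local audio files:

\begin{verbatim}
# Minimal sketch of audio content-based song similarity (illustrative only;
# the feature and similarity choices are assumptions, not the talk's system).
import numpy as np
import librosa
from sklearn.metrics.pairwise import cosine_similarity

def track_embedding(path, sr=22050, n_mfcc=20):
    """Summarize a track by the mean and std of its MFCC frames."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def recommend(seed_path, candidate_paths, k=10):
    """Rank candidate songs by cosine similarity to the seed song."""
    seed = track_embedding(seed_path).reshape(1, -1)
    cands = np.vstack([track_embedding(p) for p in candidate_paths])
    sims = cosine_similarity(seed, cands)[0]
    order = np.argsort(-sims)[:k]
    return [(candidate_paths[i], float(sims[i])) for i in order]

# Example usage (hypothetical file names):
# playlist = recommend("seed_song.mp3", ["song_a.mp3", "song_b.mp3"], k=2)
\end{verbatim}

The same per-track embeddings could also feed a tag classifier (e.g., a logistic regression per tag such as ``funky'' or ``male vocals''), which is the flavor of model behind the automated tagging task mentioned above.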
Finally, I will show how this technology can be readily extended to analyze and recommend video content for a variety of applications, integrating audio and visual cues.
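To make the audio-visual integration concrete, one simple (and purely illustrative) strategy is early fusion: concatenate per-clip audio and visual descriptors and reuse the same similarity-based ranking. The helpers below are hypothetical and only sketch the idea:

\begin{verbatim}
# Illustrative early-fusion sketch (hypothetical feature vectors; the talk's
# actual audio-visual integration may differ).
import numpy as np

def fuse_features(audio_vec, visual_vec):
    """Concatenate L2-normalized audio and visual descriptors of one clip."""
    a = audio_vec / (np.linalg.norm(audio_vec) + 1e-9)
    v = visual_vec / (np.linalg.norm(visual_vec) + 1e-9)
    return np.concatenate([a, v])
\end{verbatim}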