My career grew on me, rather than following the usual advice to “find your passion”. During my MS and PhD work, I focused on signal processing (for acoustics/audio/speech) and machine learning (in particular, neural networks and statistical pattern recognition). I enjoy the creativity involved in overcoming challenges in various disciplines (acoustics, perception, life sciences, speech/language) by using signal processing and machine learning.
During my PhD, I took an audio class taught by my soon-to-be doctoral advisor, Prof. Chris Kyriakakis. The class was very well taught, and I felt the field held many open problems where one could devise unique solutions by applying signal processing, cognition, and pattern recognition to improve generalization across spaces (rooms) and listeners.
At Hewlett Packard Labs® (HP Labs), the research is incredibly diverse. Not only am I addressing problems in VR spatial audio and Immersive Audio (IA) rendering in ECL, but I am also working on audio classification, speech analysis, modelling and representations of cyber-physical systems, life sciences (early cancer detection) with Print Adjacencies and the 3D Print Lab, EEG signal analysis and interpretation (with the Immersive Experiences Lab), developing AI models for Hollywood content, and more. Brüel & Kjær’s Head and Torso Simulator is frequently used for a number of these important projects – both for in-ear signal capture and for speech reproduction. The HATS system allows perceptually and acoustically relevant measurements while ensuring consistency and reproducibility.
The most challenging aspect for me is converting research into technologies that matter. The rewarding aspect is having the opportunity to work with a diverse group of extremely bright and talented people, with experience ranging from research labs to business strategy, and to see these technologies go into HP products. I also get to work on some blue-sky projects and fundamental research that does not necessarily have immediate business impact but could end up seeing the light of day in the mid to long term.
The defining moment for me was seeing Audyssey Labs™ spin out from USC, with its foundations built around the core technology of room acoustic equalization (MultEQ®). MultEQ was a product of my PhD research and is now deployed in millions of products, from professional (IMAX®) to consumer (Denon®, Onkyo®, Sharp® and Audi®, for example).
I would like to travel a bit more to experience different cultures and continue my scuba-diving adventures. Now that our five-year-old twins (Ariana and Ashwin) are easier to travel with, my wife Laurie and I have started vacationing. We are hoping to make a trip to Barcelona in the next few months.
My dad was a key person who shaped my early career choice. He juggled family and work life quite well, despite being the CEO of a multinational engineering company. I remember him bringing his work home every day (when he was not travelling abroad) and only getting to it after spending the entire evening with us, once we had all gone to bed. He was the last to go to bed and the first to wake up. I admired his dedication and work ethic.
Attending various Audio Engineering Society and Acoustical Society of America conferences, I noticed a regular trend in what researchers were doing to improve audio rendering; what was not seriously investigated was the influence of room, reverberation, and loudspeaker acoustics on audio rendering. The research I spearheaded led to MultEQ, a Best Paper Award at the 37th IEEE Asilomar Conference on Signals, Systems and Computers in 2003, a textbook (Immersive Audio Signal Processing, from Springer-Verlag), as well as numerous products from licensees that won Best of CES awards in various categories over the years.
Object-based audio and the ability to pinpoint objects in 3D space (in cinemas, homes, and headphones) are significant steps toward precise localization of discrete sound events. Delivering the benefit of envelopment with spatially and signal-decorrelated content is the correct next step.
Some of the work I am involved in includes researching new techniques applying deep learning and AI to the analysis of traditional 5.1 and object-based audio.
This work helps advance state-of-the-art audio and acoustical solutions, provides innovation and IP for HP, and, most importantly, improves the quality of experience (QoE) on HP devices while saving costs.
Sunil G Bharitkar
Distinguished Member of Technical Staff, Emerging Compute Lab (ECL), HP Labs