June 14, 2024
Women in AI: Catherine Breslin helps companies develop AI strategies


To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch has been publishing a series of interviews focused on remarkable women who’ve contributed to the AI revolution. We’re publishing these pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Catherine Breslin is the founder and director of Kingfisher Labs, where she helps companies develop AI strategies. She has spent more than two decades as an AI scientist and has worked at Cambridge University, Toshiba Research, and Amazon Alexa. She was previously an adviser to the VC fund Deeptech Labs and was the Solutions Architect Director at Cobalt Speech & Language.

She attended Oxford University for undergrad before receiving her master’s and PhD at the University of Cambridge.

Briefly, how did you get your start in AI? What attracted you to the field? 

I always loved maths and physics at school, and I chose to study engineering at university. That’s where I first learned about AI, though it wasn’t called AI at the time. I got intrigued by the idea of using computers to do the speech and language processing that we humans find easy. From there, I ended up studying for a PhD in voice technology and working as a researcher. We’re at a point in time where there have been major steps forward for AI, and I feel like there’s a huge opportunity to build technology that improves people’s lives.

What work are you most proud of in the AI field?

In 2020, in the early days of the pandemic, I founded my own consulting company with the mission to bring real-world AI expertise and leadership to organizations. I’m proud of the work I’ve done with my clients across different and interesting projects and also that I’ve been able to do this in a truly flexible way around my family.

How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?  

It’s hard to measure exactly, but something like 20% of the AI field is women. My perception is also that the percentage gets lower as you get more senior. For me, one of the best ways to navigate this is by building a supportive network. Of course, support can come from people of any gender. Sometimes, though, it’s reassuring to talk to women who are facing similar situations or who’ve seen the same problems, and it’s great not to feel alone.

The other thing for me is to think carefully about where to spend my energy. I believe that we’ll only see lasting change when more women get into senior and leadership positions, and that won’t happen if women spend all their energy on fixing the system rather than advancing their careers. There’s a pragmatic balance to be had between pushing for change and focusing on my own daily work.

What advice would you give to women seeking to enter the AI field?

AI is a huge and exciting field with a lot going on. There’s also a huge amount of noise, with what can seem like a constant stream of papers, products, and models being released. It’s impossible to keep up with everything. Further, not every paper or research result is going to be significant in the long run, no matter how flashy the press release. My advice is to find a niche that you’re really interested in, learn everything you can about it, and tackle the problems that you’re motivated to solve. That’ll give you the solid foundation that you need.

What are some of the most pressing issues facing AI as it evolves?

Progress in the past 15 years has been fast, and we’ve seen AI move out of the lab and into products without really having stepped back to properly assess the situation and anticipate the consequences. One example that comes to mind is how much of our voice and language technology performs better in English than other languages. That’s not to say that researchers have ignored other languages. Significant effort has been put into non-English language technology. Yet, the unintended consequence of better English language technology means that we’re building and rolling out technology that doesn’t serve everyone equally.

What are some issues AI users should be aware of?

I think people should be aware that AI isn’t a silver bullet that’ll solve all problems in the next few years. It can be quick to build an impressive demo but takes a lot of dedicated effort to build an AI system that consistently works well. We shouldn’t lose sight of the fact that AI is designed and built by humans, for humans.

What is the best way to responsibly build AI?

Responsibly building AI means including diverse views from the outset, including from your customers and anyone impacted by your product. Thoroughly testing your systems is important to be sure you know how well they work across a variety of scenarios. Testing gets the reputation of being boring work compared to the excitement of dreaming up new algorithms. Yet, it’s critical to know if your product really works. Then there’s the need to be honest with yourself and your customers about both the capability and limitations of what you’re building so that your system doesn’t get misused.

How can investors better push for responsible AI? 

Startups are building many new applications of AI, and investors have a responsibility to be thoughtful about what they’re choosing to fund. I’d love to see more investors be vocal about their vision for the future that we’re building and how responsible AI fits in.
