Captivated as a child by video games and puzzles, Marzyeh Ghassemi also developed an early fascination with health. Luckily, she found a path where she could combine the two interests.
“Although I had considered a career in health care, the pull of computer science and engineering was stronger,” says Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science (IMES) and principal investigator at the Laboratory for Information and Decision Systems (LIDS). “When I found that computer science broadly, and AI/ML specifically, could be applied to health care, it was a convergence of interests.”
Today, Ghassemi and her Healthy ML research group at LIDS work on the deep study of how machine learning (ML) can be made more robust, and subsequently applied to improve safety and equity in health.
Growing up in Texas and New Mexico in an engineering-oriented Iranian-American family, Ghassemi had role models to follow into a STEM career. While she loved puzzle-based video games — “Solving puzzles to unlock other levels or progress further was a very attractive challenge” — her mother also engaged her in more advanced math early on, enticing her toward seeing math as more than arithmetic.
“Adding or multiplying are basic skills emphasized for good reason, but the focus can obscure the idea that much of higher-level math and science are more about logic and puzzles,” Ghassemi says. “Because of my mom’s encouragement, I knew there were fun things ahead.”
Ghassemi says that along with her mother, many others supported her intellectual development. As she earned her undergraduate degree at New Mexico State University, the director of the Honors College and a former Marshall Scholar — Jason Ackelson, now a senior advisor to the U.S. Department of Homeland Security — helped her apply for a Marshall Scholarship that took her to Oxford University, where she earned a master’s degree in 2011 and first took an interest in the new and rapidly evolving field of machine learning. During her PhD work at MIT, Ghassemi says she received support “from professors and peers alike,” adding, “That environment of openness and acceptance is something I try to replicate for my students.”
While working on her PhD, Ghassemi also encountered her first clue that biases in health data can hide in machine learning models.
She had trained models to predict outcomes using health data, “and the mindset at the time was to use all available data. In neural networks for images, we had seen that the right features would be learned for good performance, eliminating the need to hand-engineer specific features.”
During a meeting with Leo Celi, principal research scientist at the MIT Laboratory for Computational Physiology and IMES and a member of Ghassemi’s thesis committee, Celi asked if Ghassemi had checked how well the models performed on patients of different genders, insurance types, and self-reported races.
Ghassemi did check, and there were gaps. “We now have almost a decade of work showing that these model gaps are hard to address — they stem from existing biases in health data and default technical practices. Unless you think carefully about them, models will naively reproduce and extend biases,” she says.
Ghassemi has been exploring such issues ever since.
Her favorite breakthrough in the work she has done came in several parts. First, she and her research group showed that learning models could recognize a patient’s race from medical images like chest X-rays, which radiologists are unable to do. The group then found that models optimized to perform well “on average” did not perform as well for women and minorities. This past summer, her group combined these findings to show that the more a model learned to predict a patient’s race or gender from a medical image, the worse its performance gap would be for subgroups in those demographics. Ghassemi and her team found that the problem could be mitigated if a model was trained to account for demographic differences, instead of being focused on overall average performance — but this process has to be performed at every site where a model is deployed.
“We are emphasizing that models trained to optimize performance (balancing overall performance with lowest fairness gap) in one hospital setting are not optimal in other settings. This has an important impact on how models are developed for human use,” Ghassemi says. “One hospital might have the resources to train a model, and then be able to demonstrate that it performs well, possibly even with specific fairness constraints. However, our research shows that these performance guarantees do not hold in new settings. A model that is well-balanced in one site may not function effectively in a different environment. This impacts the utility of models in practice, and it’s essential that we work to address this issue for those who develop and deploy models.”
Ghassemi’s work is informed by her identity.
“I am a visibly Muslim woman and a mother — both have helped to shape how I see the world, which informs my research interests,” she says. “I work on the robustness of machine learning models, and how a lack of robustness can combine with existing biases. That interest is not a coincidence.”
Regarding her thought process, Ghassemi says inspiration often strikes when she is outdoors — bike-riding in New Mexico as an undergraduate, rowing at Oxford, running as a PhD student at MIT, and these days walking by the Cambridge Esplanade. She also says she has found it helpful when approaching a tricky problem to think about its parts and try to understand how her assumptions about each part might be wrong.
“In my experience, the most limiting factor for new solutions is what you think you know,” she says. “Sometimes it’s hard to get past your own (partial) knowledge about something until you dig really deeply into a model, system, etc., and realize that you didn’t understand a subpart correctly or fully.”
As passionate as Ghassemi is about her work, she deliberately keeps track of life’s bigger picture.
“When you love your research, it can be hard to stop that from becoming your identity — it’s something that I think a lot of academics have to be aware of,” she says. “I try to make sure that I have interests (and knowledge) beyond my own technical expertise.
“One of the best ways to help prioritize a balance is with good people. If you have family, friends, or colleagues who encourage you to be a full person, hold on to them!”
Having won many awards and much recognition for work that encompasses two early passions — computer science and health — Ghassemi professes a faith in seeing life as a journey.
“There’s a quote by the Persian poet Rumi that is translated as, ‘You are what you are looking for,’” she says. “At every stage of your life, you have to reinvest in finding who you are, and nudging that towards who you want to be.”