Promoted to Chief Technology Officer in May 2022, Mira Murati, 34, has been instrumental in steering OpenAI’s strategy of testing its tools publicly. According to current and former employees, she has effectively operated as the company’s head of operations: she kept the engineering team on schedule for successive versions of ChatGPT, managed OpenAI’s relationship with Microsoft, an investor and partner deploying its technology, and played a pivotal role in shaping artificial intelligence policy in Washington and Europe. While AI research, particularly in natural language processing and computer vision, had been advancing for a decade, many of the breakthroughs remained confined to tech giants’ secretive projects.
OpenAI, under Mira Murati’s leadership, translates academic research into practical products, making AI more accessible. This approach, combined with a team of top-notch academics, has sparked widespread public interest in AI.
- Mira Murati was born in Albania and raised in Canada. She showcased her mechanical engineering skills by building a hybrid race car during her time at Dartmouth College.
- She has worked across the aerospace, automotive, virtual reality (VR), and augmented reality (AR) industries. Ms Murati then joined Elon Musk’s Tesla as a senior product manager, where she played a key role in the development of the Model X.
- Ms Murati was also associated with Leap Motion, a VR company, where she focused on applying artificial intelligence in practical, real-world scenarios.
- She is trilingual, speaking Italian, Albanian, and English.
- Ms Murati joined OpenAI in 2018, leading supercomputing strategy and managing research teams. She was also part of the leadership team, helping implement the decisions it made.
- Last year, Ms Murati was given the responsibility of overseeing the rollout of ChatGPT.
A year ago, ChatGPT was not available to the general public. Today, it is one of the most talked-about artificial intelligence products in the world, and the person responsible for it is a 34-year-old engineer, Mira Murati. Ms Murati was today appointed interim CEO of OpenAI after the board lost confidence in Sam Altman.
Besides ChatGPT, Ms Murati was also responsible for promoting Dall-E, an AI model that generates images from text. Both OpenAI offerings have come into the limelight after deepfake videos of actors Rashmika Mandanna, Katrina Kaif, and Kajol went viral.
Prime Minister Narendra Modi also expressed concern yesterday over the misuse of AI and urged the ChatGPT team to put safeguards in place to stop the creation of such morphed media.
Mira Murati, speaking on a talk show, explained how the company is working to prevent the creation of morphed media using their technologies.
“We have chosen to make Dall-E available to the public but with certain guardrails and with certain constraints,” she said while speaking to comedian Trevor Noah last year. “We do want people to understand what AI is capable of. But right now, we don’t feel very comfortable around the mitigation of misinformation, and so we do have certain guardrails,” Ms Murati added.
The 34-year-old explained how the company removes certain data to ensure that users cannot generate images of public figures. “We do not allow generation of public figures. So we will go into the data set and eliminate certain data. That’s the first step – looking at the training data of the model and auditing it, making interventions to avoid certain outcomes,” she said.
“Later, we will look at applying filters, so that when you put in a prompt, it won’t generate things that contain violence or hate,” Mira Murati added.
A series of deepfake videos on social media with morphed faces of actors Rashmika Mandanna, Katrina Kaif, and Kajol have sparked concern about the misuse of AI. Several voices in the film industry, including legendary actor Amitabh Bachchan, have called for legal action.
The government last week issued an advisory to social media platforms underlining the legal provisions that cover such deepfakes and the penalties their creation and circulation may attract.
(With inputs from agencies)