Ahmed Menshawy
Vice President of AI Engineering, Mastercard
Ahmed Menshawy is the Vice President of AI Engineering at Mastercard. In this role, he leads the AI Engineering team, driving the development and operationalization of AI products and addressing the broad range of challenges and technical debt surrounding ML pipeline deployment. Ahmed also leads a team dedicated to creating a number of AI accelerators and capabilities, including serving engines and feature stores, aimed at enhancing various aspects of AI engineering. Ahmed is the co-author of "Deep Learning with TensorFlow" and the author of "Deep Learning by Example," focusing on advanced topics in deep learning. He is also collaborating on an upcoming O'Reilly book, "Graph Learning for the Enterprise," which aims to guide enterprises in efficiently training and deploying graph learning pipelines at scale.
Upcoming Sessions
A.I. Summit 2024
March 13, 2025
16:15 - 16:45 EEST
30 MINS
How the Power of Generative AI is Shaping the Future of Tech
Real ROI: Navigating Challenges and Technical Debt in LLMs Production Deployment
Generative AI has become essential in advancing AI, enabling remarkable capabilities in natural language processing and understanding. However, the efficient deployment of LLMs in production environments reveals a landscape of challenges and technical debt.
Ethically, LLMs face issues such as bias amplification, where they might perpetuate existing stereotypes in their outputs. Misinformation is another concern, with the potential misuse of LLMs to create convincing yet false narratives. Privacy risks emerge from LLMs possibly memorizing and revealing personal data. Moreover, societal challenges include the impact on employment: LLMs could automate tasks but also lead to job displacement. These challenges highlight the need for careful management and ethical considerations in the deployment of LLMs.
In this talk, Ahmed will highlight the key challenges and technical debt associated with LLM deployment, which demand customization and sophisticated engineering solutions not readily available in broad-use machine learning libraries or inference engines.