Intel
AI Engineer Intern
- Developed deep learning models on large-scale datasets for real-world use cases, improving prediction accuracy and performance across multiple internal tools.
- Built, fine-tuned, and deployed custom transformer-based models for document classification, summarization, and natural language understanding, reducing model inference time by up to 35%.
- Collaborated with research and MLOps teams to streamline model deployment workflows using ONNX, TensorRT, and Intel® Optimization Libraries, reducing inference latency and compute resource consumption.
- Contributed to an internal AI Model Zoo, adding reusable components and Weights & Biases experiment tracking to improve reproducibility and model management.
- Benchmarked AI models across Intel hardware to verify performance compatibility, and delivered optimization reports to guide future model scaling and edge deployment.
- Participated in regular cross-team standups and code reviews, dedicating 30+ hours per week to research, development, and documentation.
- Applied Agile practices for sprint planning and task management in JIRA, collaborating closely with senior AI engineers and data scientists.