Software Engineer Intern
Inference Infrastructure
Posted on 10/4/2025

ByteDance
No salary listed
Seattle, WA, USA
In Person
About the Team
The Inference Infrastructure team is the creator and open-source maintainer of AIBrix, a Kubernetes-native control plane for large-scale LLM inference. We are part of ByteDance’s Core Compute Infrastructure organization, responsible for designing and operating the platforms that power microservices, big data, distributed storage, machine learning training and inference, and edge computing across multi-cloud and global datacenters.
With ByteDance’s rapidly growing businesses and a global fleet of machines running hundreds of millions of containers daily, we are building the next generation of cloud-native, GPU-optimized orchestration systems. Our mission is to deliver infrastructure that is highly performant, massively scalable, cost-efficient, and easy to use—enabling both internal and external developers to bring AI workloads from research to production at scale.
We are expanding our focus on LLM inference infrastructure to support new AI workloads, and are looking for engineers passionate about cloud-native systems, scheduling, and GPU acceleration. You’ll work in a hyper-scale environment, collaborate with world-class engineers, contribute to the open-source community, and help shape the future of AI inference infrastructure globally.
We are looking for talented individuals to join us for an internship in 2026. Internships at ByteDance offer students industry exposure and hands-on experience while they develop fundamental skills and explore potential career paths. A blend of social events and development workshops will be available throughout the program, and you will apply your knowledge in real-world scenarios while laying a strong foundation for personal and professional growth. The internship runs for 12 weeks.
Candidates can apply to a maximum of two positions and will be considered for jobs in the order they apply. This application limit applies to ByteDance and its affiliates' jobs globally. Applications are reviewed on a rolling basis, so we encourage you to apply as early as possible. Please state your availability (start date and end date) clearly in your resume.
Summer Start Dates:
- May 11th, 2026
- May 18th, 2026
- May 26th, 2026
- June 8th, 2026
- June 22nd, 2026
Responsibilities
- Design and build large-scale, container-based cluster management and orchestration systems with extreme performance, scalability, and resilience.
- Architect next-generation cloud-native GPU and AI accelerator infrastructure to deliver cost-efficient and secure ML platforms.
- Collaborate across teams to deliver world-class inference solutions using vLLM, SGLang, TensorRT-LLM, and other LLM engines.
- Stay current with the latest advances in open source (Kubernetes, Ray, etc.), AI/ML and LLM infrastructure, and systems research; integrate best practices into production systems.
- Write high-quality, production-ready code that is maintainable, testable, and scalable.
Minimum Qualifications
- B.S./M.S. in Computer Science, Computer Engineering, or related fields with 2+ years of relevant experience
- Able to commit to working for 12 weeks during Summer 2026
- Strong understanding of large model inference, distributed and parallel systems, and/or high-performance networking systems.
- Hands-on experience building cloud or ML infrastructure in areas such as resource management, scheduling, request routing, monitoring, or orchestration.
- Solid knowledge of container and orchestration technologies (Docker, Kubernetes).
- Proficiency in at least one major programming language (Go, Rust, Python, or C++).
Preferred Qualifications
- Experience contributing to or operating large-scale cluster management systems (e.g., Kubernetes, Ray).
- Experience with workload scheduling, GPU orchestration, scaling, and isolation in production environments.
- Hands-on experience with GPU programming (CUDA) or inference engines (vLLM, SGLang, TensorRT-LLM).
- Familiarity with public cloud providers (AWS, Azure, GCP) and their ML platforms (SageMaker, Azure ML, Vertex AI).
- Strong knowledge of ML systems (Ray, DeepSpeed, PyTorch) and distributed training/inference platforms.
- Excellent communication skills and ability to collaborate across global, cross-functional teams.
- Passion for system efficiency, performance optimization, and open-source innovation.
By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy
