Cloud Acceleration Engineer Intern

DPU & AI Infra

Posted on 9/26/2025

ByteDance

No salary listed

San Jose, CA, USA

In Person

About the Team

The ByteDance DPU (Data Processing Unit) team is building the foundational computing infrastructure for ByteDance and Volcano Engine Public Cloud. Our mission is to advance the architecture, development, and research of next-generation software-hardware technologies across compute, networking, and storage for cloud and AI computing.

Our technology stack spans:
- Cloud virtualization & hypervisors
- High-performance user-space network protocols (DPDK, RDMA, etc.)
- High-speed interconnect and virtual switching
- Distributed storage acceleration
- GPU virtualization and scheduling for AI/ML workloads

We work at the intersection of software systems, distributed infrastructure, and custom hardware acceleration, shaping the next wave of cloud-scale computing. We are looking for talented individuals to join us for an internship in 2026.

PhD internships at ByteDance give students the opportunity to actively contribute to our products and research, as well as to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis; we encourage you to apply early. Please state your availability clearly in your resume (start date and end date).

Responsibilities
- Design and develop DPU network software with a focus on high performance, low latency, and reliability.
- Collaborate with hardware teams to build software-hardware co-design solutions for networking and storage acceleration.
- Explore AI/ML infrastructure acceleration, leveraging DPUs, GPUs, and custom hardware to optimize distributed training and inference.
- Drive end-to-end performance optimization, from OS kernels and drivers to user-space runtime systems.
- Contribute to architecture design, technical proposals, and long-term research directions.

Minimum Qualifications
- Currently pursuing a Ph.D. in a related field, with research training and publications.
- Able to commit to working for 12 weeks during Summer 2026.
- Proficiency in C/C++ development and debugging.
- Strong Linux systems development experience.
- Solid understanding of computer architecture, network architecture, and operating systems.
- Background in at least one of: software-hardware co-design, distributed systems, high-performance networking, or AI/ML systems.
- Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.

Preferred Qualifications
- Intent to return to the degree program after completion of the internship.
- Demonstrated software engineering experience from previous internships, work experience, coding competitions, or publications.
- High levels of creativity and quick problem-solving capabilities.
- Proven experience designing and building AI/ML infrastructure, including but not limited to inference KV cache systems and data preprocessing systems.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy