Project Description
This project develops a heterogeneous GPU–FPGA neural digital predistortion (DPD) system for AI-native radio units. The system splits the DPD workload across two platforms: an NVIDIA GPU handles online model training and weight updates (ms-scale loop), while a Xilinx RFSoC (ZCU670) runs real-time DPD inference at ns-scale latency. The student will co-design the neural DPD model and the heterogeneous runtime: incorporate neural network quantization and pruning to reduce compute and memory traffic; build a GPU-based training pipeline for streaming I/Q data; and deploy the trained model onto FPGAs. The system will be validated using our in-house RF measurement setup.
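The quantization-and-pruning step can be sketched in a few lines of PyTorch. The model below is illustrative only (a small feed-forward network over a window of I/Q samples, not the project's actual DPD architecture), and the 50% pruning ratio is an arbitrary example:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical neural DPD: maps a short window of complex baseband samples
# (I/Q interleaved) to one predistorted I/Q pair.
class NeuralDPD(nn.Module):
    def __init__(self, memory_depth=8, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * memory_depth, hidden),  # I/Q window -> hidden
            nn.Tanh(),
            nn.Linear(hidden, 2),                 # -> predistorted I/Q
        )

    def forward(self, x):
        return self.net(x)

model = NeuralDPD()

# Magnitude pruning: zero the 50% smallest weights in each Linear layer
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

x = torch.randn(4, 16)   # batch of 4 windows, memory depth 8 (I/Q interleaved)
y = model(x)
print(y.shape)           # torch.Size([4, 2])
sparsity = (model.net[0].weight == 0).float().mean()
print(f"layer-0 sparsity: {sparsity:.2f}")
```

The baked-in sparsity is what the FPGA datapath can later exploit to skip zero-weight multiplications.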
Preferred Skills
Python & PyTorch (required)
CUDA / TensorRT / GPU profiling (nice to have)
FPGA design with Vivado / Vitis (nice to have)
Signal processing and basic RF concepts (nice to have)
Supervision
Supervisor: Dr. Chang Gao (TU Delft, Chang.Gao@tudelft.nl)
Co-supervisor: Dr. Chris Dick (NVIDIA)
As an MSc student, you will have weekly meetings with Dr. Chang Gao and Dr. Chris Dick.
References
[1] OpenDPD GitHub repo: https://github.com/lab-emi/OpenDPD
[2] OpenDPDv2: A Unified Learning and Optimization Framework for Neural Network Digital Predistortion

Project Description:
This project develops an LLM-based design assistant that translates natural-language instructions into ready-to-use passive RF building blocks (inductors, transformers, matching networks) for millimeter-wave power amplifiers. You will build an AI agent that understands designer intent, orchestrates parametric PCell generation, and drives automated DRC/LVS checking and EM/Spectre verification, from a chat prompt to a validated layout.
You will have freedom to innovate on LLM tool-use mechanisms, retrieval-augmented generation over RF design knowledge, multi-objective ranking strategies, and closed-loop agent workflows with simulation feedback. Students motivated to publish at top-tier venues (e.g., IEEE IMS, TCAD, DAC) are highly welcome.
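A possible skeleton for the agent's tool-use loop is sketched below. The tool names (generate_pcell, run_drc, run_em_sim), their return values, and the example inductor target are all placeholders: a real agent would call an LLM API to choose the next tool and would invoke the actual PDK, DRC deck, and EM solver:

```python
# Hypothetical tool-dispatch skeleton for the LLM agent. All tools are stubs.
def generate_pcell(component, params):
    # Stub: would emit a parametric layout cell via the PDK's PCell API.
    return {"component": component, "params": params, "layout": "stub.gds"}

def run_drc(layout):
    # Stub: would invoke the foundry DRC deck; here, always clean.
    return {"violations": 0}

def run_em_sim(layout, freq_ghz):
    # Stub: would run an EM solver and return, e.g., inductance and Q.
    return {"L_nH": 0.25, "Q": 18.0, "freq_ghz": freq_ghz}

TOOLS = {"generate_pcell": generate_pcell,
         "run_drc": run_drc,
         "run_em_sim": run_em_sim}

def agent_step(tool_name, **kwargs):
    """Dispatch one tool call requested by the LLM."""
    return TOOLS[tool_name](**kwargs)

# One pass of the spec -> layout -> verification loop for a 0.25 nH inductor.
cell = agent_step("generate_pcell", component="inductor",
                  params={"L_nH": 0.25})
drc = agent_step("run_drc", layout=cell["layout"])
em = agent_step("run_em_sim", layout=cell["layout"], freq_ghz=28.0)
ok = drc["violations"] == 0 and abs(em["L_nH"] - 0.25) < 0.02
print("design accepted:", ok)
```

The closed-loop research question is what the agent does when `ok` is False: re-prompt, adjust PCell parameters, or escalate to the designer.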
Preferred Skills:
Python (experience with LLM APIs/frameworks is a strong plus)
Familiarity with RF passive components and basic IC layout concepts
Deliverables:
A working LLM-based design assistant coordinating the full passive design flow.
End-to-end demonstration: natural-language specs → PCell generation → EM + Spectre validation.
A high-quality manuscript for submission to a top-tier venue.
What we offer:
The student will receive a paid internship at NXP Semiconductors, working alongside experienced RF IC designers with access to industry-standard EDA tools and cutting-edge mm-wave design flows.
How to apply:
Interested students, please contact Dr. Chang Gao (Chang.Gao@tudelft.nl) by 30 April 2026. Candidate screening will take place in May.
Supervision: Chang Gao (TU Delft), Masoud Pashaeifar (NXP), Mark van der Heijden (NXP)
Project Description
Analog IC design is traditionally a labor-intensive, expert-driven process. This project investigates whether state-of-the-art coding agents — Claude Code (Anthropic) and Codex (OpenAI) — can automate the full design loop of a SAR ADC, from natural-language specifications to a verified schematic.
The student will build agent workflows that: (1) translate specs into topology and sizing, (2) generate and run simulation testbenches (Cadence Spectre / Xyce / ngspice), (3) interpret results (DNL, INL, SNDR, power), and (4) iterate autonomously toward design closure. A central research question is how well current LLM agents handle the continuous, multi-objective nature of analog design, and where they fall short of human experts. There is broad freedom to innovate on agent architectures, tool-use strategies, and feedback loops. Students aiming to publish at top-tier venues (DAC, TCAD) are especially welcome.
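The iterate-to-closure loop in step (4) can be illustrated with a toy example. Here `simulate()` is a stand-in for a Spectre/Xyce/ngspice testbench, the linear SNDR/power models are invented for illustration, and the "agent" is a fixed heuristic where an LLM would actually propose the next sizing:

```python
# Toy sketch of the autonomous iterate-to-closure loop (all numbers invented).
def simulate(cap_dac_fF):
    """Fake testbench: larger DAC caps -> better SNDR but more power."""
    sndr_db = 50.0 + 10.0 * (cap_dac_fF / 100.0)   # toy model, not physics
    power_uw = 5.0 + 0.2 * cap_dac_fF
    return {"sndr_db": sndr_db, "power_uw": power_uw}

spec = {"sndr_db": 62.0, "power_uw": 40.0}
cap = 50.0          # initial sizing guess (fF)
history = []

for step in range(20):
    result = simulate(cap)
    history.append((cap, result))
    if (result["sndr_db"] >= spec["sndr_db"]
            and result["power_uw"] <= spec["power_uw"]):
        break  # design closure reached
    # "Agent" decision: grow capacitance if SNDR fails, shrink if power fails.
    if result["sndr_db"] < spec["sndr_db"]:
        cap *= 1.2
    elif result["power_uw"] > spec["power_uw"]:
        cap *= 0.9

print(f"closed in {len(history)} iterations at cap = {cap:.1f} fF")
```

Iteration count and wall-clock time of exactly this kind of loop, with a real simulator in place of `simulate()`, are among the metrics for comparing Claude Code and Codex.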
Preferred Skills:
Python (LLM APIs / agent frameworks a plus)
Analog IC design fundamentals
Familiarity with Spectre, Xyce, or ngspice
Basic knowledge of layout and open-source PDKs (e.g., SKY130)
Deliverables
A working agentic design assistant that orchestrates the SAR ADC flow from spec to verified schematic.
A quantitative comparison of Claude Code vs. Codex (closure rate, simulation accuracy, iteration count, wall-clock time).
End-to-end demonstration: NL specs → topology → sizing → simulation → closure.
A manuscript targeting a top-tier venue.
References:
Claude Code documentation (Anthropic)
OpenAI Codex
SKY130 open-source PDK
Xyce circuit simulator
Razavi, Design of Analog CMOS Integrated Circuits
Supervision: Dr. Chang Gao (Chang.Gao@tudelft.nl) and Dr. Qinwen Fan (Q.Fan@tudelft.nl), Department of Microelectronics, TU Delft.
How to Apply: Send your CV and transcripts to Dr. Chang Gao.
Project Description:
This project builds an energy-efficient FPGA-based accelerator for real-time voice cloning. Starting from the open-source Real-Time-Voice-Cloning pipeline, we will replace the backbone TTS model with EfficientSpeech to reduce compute and memory costs while preserving naturalness (links in the Reference Material below). The student will co-design the model and hardware: apply quantization, pruning, and sparsity/weight-sharing to the encoder/decoder blocks, then map the key kernels (mel-spectrogram front-end, attention/FFN, vocoder) onto an FPGA datapath. We will prototype on the Avnet MiniZed board, targeting end-to-end latency, intelligibility (WER), MOS-style quality metrics, and performance per watt.
Ethics & consent: all cloning experiments must use voices with explicit written consent and include a “cloned audio” watermark.
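The compression steps can be sketched in PyTorch on a toy stand-in for one decoder block (the layer sizes, 40% pruning ratio, and 80 mel bins are illustrative; this is not the real EfficientSpeech architecture):

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for one TTS decoder block: prune it, then apply PyTorch
# post-training dynamic quantization to shrink the Linear layers to int8.
block = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 80),   # 80 mel bins (illustrative)
)

# 1) Unstructured magnitude pruning: zero 40% of weights per Linear layer
for m in block.modules():
    if isinstance(m, nn.Linear):
        prune.l1_unstructured(m, name="weight", amount=0.4)
        prune.remove(m, "weight")

# 2) Post-training dynamic quantization (int8 weights at inference)
qblock = torch.ao.quantization.quantize_dynamic(
    block, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 100, 128)      # (batch, frames, features)
mel = qblock(x)
print(mel.shape)                  # torch.Size([1, 100, 80])
```

On the FPGA side, the same int8 weights and the pruned-away zeros are what make a small datapath on the MiniZed feasible.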
Preferred Skills:
FPGA design (Verilog/SystemVerilog), toolflows (Vivado/Vitis)
Python & PyTorch
Deliverables:
Compressed EfficientSpeech-based TTS model integrated into the Real-Time-Voice-Cloning pipeline
FPGA accelerator on MiniZed with real-time inference (encoder/decoder + vocoder offload)
Reference Material:
Real-Time-Voice-Cloning: https://github.com/CorentinJ/Real-Time-Voice-Cloning
Demo video: https://www.youtube.com/watch?v=-O_hYhToKoA
EfficientSpeech paper: https://arxiv.org/abs/2305.13905
Avnet MiniZed board: https://www.avnet.com/americas/products/avnet-boards/avnet-board-families/minized
Contact Person:
Dr. Chang Gao (Chang.Gao@tudelft.nl)
This project investigates the integration of neural networks into the signal acquisition pipeline to enhance the performance of low-resolution, low-power analog-to-digital converters (ADCs). Traditional ADCs are designed to maximize signal fidelity across a wide range of applications, but this project takes a domain-adaptive approach by pairing simplified ADC front-ends with task-specific neural post-processing to reconstruct or enhance the acquired signals.
The student will work on a prototype system built around a commercial off-the-shelf ADC (e.g., from Texas Instruments) on a PCB, and implement neural enhancement models in PyTorch to process the raw ADC outputs. The project will include hardware-in-the-loop training and inference, with optional deployment on embedded platforms such as Raspberry Pi or FPGAs. Emphasis will be placed on evaluating power, latency, and accuracy trade-offs in an end-to-end pipeline.
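The core idea, a learned post-processor recovering fidelity that the simplified front-end gives up, can be demonstrated in simulation before any PCB work. Below, a synthetic 4-bit "ADC" quantizes a clean signal and a small network learns to reconstruct the original from a window of quantized samples; the window size, architecture, and training budget are illustrative choices, not project requirements:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic 4-bit mid-tread quantizer standing in for the low-res ADC.
def adc_4bit(x):
    levels = 2 ** 4
    return torch.round((x + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

t = torch.linspace(0, 8 * torch.pi, 4096)
clean = torch.sin(t)          # ground-truth signal
quant = adc_4bit(clean)       # what the cheap ADC actually delivers

win = 16
X = quant.unfold(0, win, 1)   # sliding windows of quantized samples
y = clean[win // 2 : win // 2 + X.shape[0]].unsqueeze(1)  # center-sample target

net = nn.Sequential(nn.Linear(win, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

with torch.no_grad():
    init_mse = nn.functional.mse_loss(net(X), y).item()

for _ in range(300):          # full-batch training on the toy dataset
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

final_mse = loss.item()
print(f"reconstruction MSE: {init_mse:.5f} -> {final_mse:.5f}")
```

In the actual project the quantizer is replaced by the real TI ADC in the loop, which is exactly where hardware-in-the-loop training differs from this simulation.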
Preferred Skills: PyTorch, Signal Processing Fundamentals, Embedded Systems, PCB Prototyping, Verilog (optional for hardware acceleration)
Contact Person: Dr. Chang Gao (Chang.Gao@tudelft.nl)
This project aims to develop an AI-driven calibration system for phase-locked loop (PLL) non-idealities, such as jitter, phase noise, and locking instability, by adapting the OpenDPD framework, traditionally used for power amplifier linearization. Leveraging MATLAB for system-level PLL modeling and PyTorch for training neural networks, the project will automate the identification and compensation of non-linear behaviors in mixed-signal circuits. Deliverable: an open-source simulation package enabling AI-calibrated PLLs for high-precision communication systems.
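A minimal PyTorch sketch of the learned-compensation idea, applied to a synthetic phase trace rather than a real PLL model: a deterministic spur plus noise is the "non-ideality", and a small network predicts the correction from a window of past phase-error samples. The spur model, window length, and network are illustrative only:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n = 2048
t = torch.arange(n, dtype=torch.float32)
spur = 0.1 * torch.sin(2 * torch.pi * t / 64)     # periodic phase spur (rad)
phase_err = spur + 0.01 * torch.randn(n)          # measured phase error

win = 32
X = phase_err.unfold(0, win, 1)[:-1]              # windows of past samples
y = phase_err[win:].unsqueeze(1)                  # next-sample error to cancel

net = nn.Sequential(nn.Linear(win, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()

# Subtracting the prediction should suppress the deterministic spur.
residual = y - net(X).detach()
print(f"error power before: {y.var():.5f}, after correction: {residual.var():.5f}")
```

In the project, the synthetic trace would be replaced by phase-error data exported from the MATLAB/Simulink PLL model.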
Preferred Skills: MATLAB/Simulink, PyTorch, RF system fundamentals.
Contact Person: Dr. Masoud Babaie (M.Babaie@tudelft.nl) and Dr. Chang Gao (Chang.Gao@tudelft.nl)
Project Description:
This project focuses on leveraging AI-driven automation to co-design neural network accelerators for real-time edge applications. The goal is to integrate software optimization techniques such as quantization and pruning with hardware-aware neural network training to develop an energy-efficient accelerator on FPGA and ASIC platforms. By automating the end-to-end design process using reinforcement learning and AI-driven reasoning, the project aims to bridge the gap between deep learning frameworks and hardware implementation.
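The automated design-space exploration can be illustrated with a deliberately simple random search (a stand-in for the reinforcement-learning loop). The bit-width/PE-array choices, the accuracy and energy proxy models, and the 0.90 accuracy constraint are all invented for illustration:

```python
import random

random.seed(0)

# Toy hardware-aware search: each candidate picks a weight bit-width and a
# processing-element array size; proxy models score accuracy vs. energy.
def accuracy_proxy(bits):
    return 1.0 - 0.5 / bits            # fewer bits -> larger accuracy drop

def energy_proxy(bits, pe_array):
    return bits * pe_array * 0.01      # wider datapath, more PEs -> more energy

best = None
for _ in range(100):
    cand = {"bits": random.choice([4, 6, 8, 16]),
            "pe_array": random.choice([16, 32, 64])}
    acc = accuracy_proxy(cand["bits"])
    energy = energy_proxy(cand["bits"], cand["pe_array"])
    if acc < 0.90:                     # hard accuracy constraint
        continue
    score = acc - 0.5 * energy         # reward: accuracy minus energy penalty
    if best is None or score > best["score"]:
        best = {**cand, "score": score, "acc": acc, "energy": energy}

print(best)
```

In the project, the proxies are replaced by hardware-aware training results and FPGA/ASIC synthesis reports, and the random sampler by an RL or LLM-guided policy.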
Preferred Skills:
Deep Learning frameworks (PyTorch, TensorFlow)
Hardware Description Languages (Verilog/SystemVerilog)
FPGA/ASIC design flow (Vivado, OpenLANE, Cadence)
Contact Person: MSc Ang Li (ang.li@tudelft.nl) and Dr. Chang Gao (Chang.Gao@tudelft.nl)