A comprehensive AI-driven research program for fully automated analog circuit design — from performance specification to manufacturable layout — spanning topology selection, parameter optimization, and physical synthesis.
Building a comprehensive AI ecosystem that automates every stage of analog circuit design — from specification to manufacturable silicon.
Designing analog and RF circuits remains one of the most expertise-intensive and time-consuming tasks in modern electronics development. Unlike digital design, which benefits from mature EDA automation, analog design still relies heavily on manual, iterative workflows guided by years of engineering intuition. Each stage — from selecting the right circuit topology, to tuning component parameters for target performance, to ensuring the design is physically realizable under layout constraints — demands deep domain knowledge and painstaking trial-and-error.
The FALCON research program addresses this fundamental bottleneck by developing a unified AI-driven framework for fully automated, end-to-end analog circuit design. Our work spans the complete design pipeline: foundational datasets and benchmarks for training ML models, supervised learning approaches for circuit parameter inference, a unified framework integrating topology selection with layout-constrained optimization, and EM-aware physical synthesis for translating netlists into manufacturable GDSII layouts. Together, these contributions form a cohesive research ecosystem advancing the state of the art toward truly autonomous analog design automation.
Automatically selecting the optimal circuit architecture from a library of expert-designed topologies based on target performance specifications, guided by human design heuristics and ML classification.
Inferring precise component parameters through differentiable forward models — custom edge-centric graph neural networks that serve as learned surrogates for expensive industrial circuit simulators.
Integrating physical layout constraints — parasitic effects, frequency-dependent EM behavior, and design rule compliance — directly into the optimization loop via differentiable layout cost functions.
Translating optimized circuit netlists into manufacturable GDSII layouts through neural inductor modeling, intelligent P-Cell optimization, and automated placement and routing for RF circuits.
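The four stages above can be sketched as one chained pipeline. Everything below is a hypothetical, minimal stand-in, not the FALCON API: the real forward model is an edge-centric GNN and the real evaluator is Cadence Spectre.

```python
import numpy as np

# Illustrative sketch of the four stages above chained into one pipeline.
# All functions are toy stand-ins for the learned models described in the text.

def select_topology(spec, library):
    """Stage 1 (toy): pick the topology whose nominal performance is nearest."""
    names = list(library)
    dists = [np.linalg.norm(spec - library[n]) for n in names]
    return names[int(np.argmin(dists))]

def forward_model(params, gain):
    """Stage 2 (toy): a differentiable surrogate, parameters -> performance."""
    return gain * params

def layout_cost(params):
    """Stage 3 (toy): a differentiable layout penalty (area/parasitic proxy)."""
    return 0.01 * float(np.sum(params ** 2))

library = {"LNA": np.array([20.0, 3.0]),     # e.g. [gain dB, noise figure dB]
           "PA":  np.array([15.0, 8.0])}
spec = np.array([19.0, 3.5])

topology = select_topology(spec, library)               # stage 1
perf = forward_model(np.array([1.0, 2.0]), gain=2.0)    # stage 2
penalty = layout_cost(np.array([1.0, 2.0]))             # stage 3
print(topology, perf, penalty)
```

In the actual framework the stages share gradients end to end; here they are simply called in sequence to show the data flow.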
Our core framework unifies topology selection, forward modeling, and layout-aware inference into a single differentiable pipeline — the first to do so at industrial scale.
A lightweight MLP classifier maps target performance specifications to the most suitable circuit topology from a curated library of 20 expert-designed architectures, guided by human design heuristics.
MLP Classifier · >99% Accuracy
A custom edge-centric Graph Neural Network serves as a differentiable surrogate for the Cadence Spectre simulator, mapping netlist-derived circuit graphs to 16-dimensional performance vectors.
Edge-Centric GNN · 16-dim Output
Gradient-based optimization over the learned GNN recovers design parameters satisfying target specs under a differentiable layout cost capturing parasitic effects, EM behavior, and DRC constraints.
Differentiable Layout · <1s Inference
Evaluated on 1M+ Cadence Spectre-simulated mm-wave circuits across 20 topologies, our research program achieves state-of-the-art results across all design stages.
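The gradient-based inference stage can be illustrated with a tiny stand-in: descend a differentiable surrogate toward a target spec while a layout-style penalty regularizes the parameters. The linear surrogate, penalty weight, and all numbers here are made up for the sketch; FALCON uses its trained GNN and learned layout cost instead.

```python
import numpy as np

# Minimal sketch of layout-aware parameter inference: gradient descent over a
# differentiable surrogate f(x) toward a target spec, plus a layout penalty.
A = np.array([[2.0, 0.5],
              [0.3, 1.5]])                 # toy "circuit" response matrix

def surrogate(x):
    """Predicted performance vector (stand-in for the edge-centric GNN)."""
    return A @ x

def loss_and_grad(x, target, lam=0.01):
    r = surrogate(x) - target              # spec mismatch
    loss = r @ r + lam * (x @ x)           # ||r||^2 + layout-style penalty
    grad = 2 * A.T @ r + 2 * lam * x       # exact gradient of the loss
    return loss, grad

target = np.array([1.0, 2.0])              # desired performance
x = np.zeros(2)                            # initial design parameters
for _ in range(500):                       # plain gradient descent
    _, g = loss_and_grad(x, target)
    x -= 0.1 * g

print(np.round(surrogate(x), 2))           # close to the target spec
```

In the real pipeline the same loop backpropagates through the GNN surrogate and the differentiable layout cost, which is what makes sub-second inference possible.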
A systematic research program building toward fully automated analog design — from foundational datasets to complete physical layout generation.
The foundational dataset and benchmark that enabled our research program. AICircuit introduces a comprehensive multi-level dataset of 7 core analog/RF circuits and 2 complex wireless transceiver systems, simulated with Cadence Spectre, and evaluates MLPs, Transformers, SVRs, and other ML models on circuit design tasks.
A comprehensive evaluation of supervised ML approaches for designing circuit parameters from performance specifications. Benchmarks diverse models from transformers to random forests, revealing that simpler circuits (LNAs) achieve 0.3% mean relative error while complex circuits (PAs, VCOs) benefit from deeper architectures.
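A toy version of this supervised setting shows how the mean relative error metric quoted above is computed: fit a model mapping performance vectors back to design parameters and score its predictions. The linear data-generating model here is a synthetic stand-in for Spectre-simulated circuits, which are nonlinear and use richer regressors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a Spectre-simulated dataset: parameters -> performance
# is a fixed linear map plus noise (real circuits are nonlinear; this is a toy).
W_true = np.array([[1.2, -0.4],
                   [0.3,  2.0],
                   [0.7,  0.7]])
params = rng.uniform(1.0, 3.0, size=(1000, 2))             # design parameters
perf = params @ W_true.T + 0.01 * rng.normal(size=(1000, 3))

# Supervised inverse design: regress parameters on performance (least squares).
W_inv, *_ = np.linalg.lstsq(perf, params, rcond=None)

pred = perf @ W_inv
mre = np.mean(np.abs(pred - params) / np.abs(params))      # mean relative error
print(f"mean relative error: {mre:.4f}")
```

The benchmark papers apply the same metric, with MLPs, transformers, and tree ensembles in place of this linear fit.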
Flagship · 2025
The unified ML framework that brings it all together. FALCON integrates performance-driven topology selection, edge-centric GNN forward modeling, and gradient-based layout-aware parameter inference into a single end-to-end pipeline. Trained on 1M+ Cadence Spectre-simulated mm-wave circuits across 20 topologies, achieving >99% topology accuracy, <10% prediction error, and sub-second design time.
Extends the FALCON vision from schematic-level to complete physical layout generation. Features a neural inductor model with EM-accurate predictions across 1–100 GHz, intelligent P-Cell optimization for DRC compliance, and a complete placement and routing engine — enabling full netlist-to-GDSII automation for 22-nm CMOS RF circuits.
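The surrogate-modeling idea behind the neural inductor model can be sketched as follows: fit a cheap model to a frequency-dependent inductance curve over 1–100 GHz, then query it in place of an EM solver. The "ground truth" below is a textbook-style lumped-element toy with assumed values (0.5 nH, 150 GHz self-resonance), not foundry EM data, and the polynomial stands in for the neural model.

```python
import numpy as np

# Toy "EM ground truth": effective inductance of a spiral rising toward its
# self-resonant frequency (a lumped-element textbook model, not foundry data).
L0, f_sr = 0.5e-9, 150e9                     # assumed: 0.5 nH, 150 GHz SRF
def em_inductance(f_hz):
    return L0 / (1.0 - (f_hz / f_sr) ** 2)

f = np.linspace(1e9, 100e9, 200)             # 1-100 GHz sweep
L_nh = em_inductance(f) * 1e9                # work in nH

# Cheap surrogate standing in for the neural model: a low-order polynomial in
# normalized frequency, fit once and then queried instead of an EM solver.
x = f / 100e9                                # normalize frequency to (0, 1]
coef = np.polyfit(x, L_nh, deg=6)
surrogate = lambda f_hz: np.polyval(coef, f_hz / 100e9)

rel_err = np.max(np.abs(surrogate(f) - L_nh) / L_nh)
print(f"max relative error over 1-100 GHz: {rel_err:.1e}")
```

The published model replaces this polynomial with a neural network trained on EM simulations, which is what allows accurate predictions across process corners rather than a single analytic curve.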
Our research is powered by large-scale, high-fidelity analog circuit datasets generated using industrial Cadence Spectre simulations, enabling reproducible AI-driven circuit design research.
The largest foundry-grade analog circuit dataset to date. It contains over 1,000,000 Cadence Spectre-simulated circuit instances across 20 expert-designed topologies, capturing the process fidelity, noise characteristics, and layout-dependent behavior of foundry-calibrated flows — far surpassing symbolic SPICE alternatives.
Spanning core analog/RF building blocks
The foundational dataset and benchmark that enabled the FALCON research program. AICircuit introduces a comprehensive multi-level dataset of 7 core analog/RF circuits and 2 complex wireless transceiver systems, simulated with Cadence Spectre. It evaluates MLPs, Transformers, SVRs, and other ML models for circuit design tasks — establishing rigorous baselines for AI-driven analog design.
From individual components to complete transceiver systems
If you find our research useful, please consider citing the relevant papers.
@inproceedings{Mehradfar2025FALCON,
  title     = {{FALCON}: An {ML} Framework for Fully Automated Layout-Constrained Analog Circuit Design},
  author    = {Mehradfar, Asal and Zhao, Xuzhe and Huang, Yilun and Ceyani, Emir and Yang, Yankai and Han, Shihao and Aghasi, Hamidreza and Avestimehr, Salman},
  booktitle = {The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year      = {2025}
}

@inproceedings{Mehradfar2024AICircuit,
  title     = {{AICircuit}: A Multi-Level Dataset and Benchmark for {AI}-Driven Analog Integrated Circuit Design},
  author    = {Mehradfar, Asal and Zhao, Xuzhe and Niu, Yue and Babakniya, Sara and Alesheikh, Mahdi and Aghasi, Hamidreza and Avestimehr, Salman},
  booktitle = {Machine Learning and the Physical Sciences Workshop at NeurIPS 2024},
  year      = {2024}
}

@article{Mehradfar2025Supervised,
  title   = {Supervised Learning for Analog and {RF} Circuit Design: Benchmarks and Comparative Insights},
  author  = {Mehradfar, Asal and Zhao, Xuzhe and Niu, Yue and Babakniya, Sara and Alesheikh, Mahdi and Aghasi, Hamidreza and Avestimehr, Salman},
  journal = {arXiv preprint arXiv:2501.11839},
  year    = {2025}
}

@inproceedings{Huang2026EMAware,
  title     = {{EM}-Aware Physical Synthesis: Neural Inductor Modeling and Intelligent Placement \& Routing for {RF} Circuits},
  author    = {Huang, Yilun and Mehradfar, Asal and Avestimehr, Salman and Aghasi, Hamidreza},
  booktitle = {IEEE International Symposium on Circuits and Systems (ISCAS)},
  year      = {2026}
}
A collaborative effort between the University of Southern California and the University of California, Irvine.






For questions, collaborations, or inquiries about FALCON: