Neural Architecture Search — a compact, categorized walkthrough
Neural Architecture Search (NAS) automates the design of neural network architectures, replacing manual trial-and-error with systematic search. Below is a curated, categorized list of influential papers, frameworks, benchmarks, and practical notes, each linked to a stable online source where possible.
Use it as a jumping-off point for survey reading, trying out code, or building hardware-aware NAS flows.
NAS benchmarks: a compact, reproducible benchmark across multiple datasets (CIFAR-10/100, ImageNet-16-120) designed for fast algorithm evaluation and ablation studies.
NAS for transformers: adapts NAS ideas to transformer architectures and sequence tasks, focusing on block-level search and pruning for efficiency.
Quick Takeaways
Choose your search strategy by compute budget: RL/evolution often produce strong results but can be expensive; parameter sharing, surrogate models, and differentiable methods reduce cost.
Benchmarks matter: Use NAS-Bench datasets and standardized pipelines for reproducible comparisons and ablations.
Hardware must be in the loop: for deployment, include latency, power, and model size as explicit objectives, or use hardware-aware flows such as ProxylessNAS, FBNet, or Once-for-All (OFA).
One-shot & supernet approaches: offer dramatic speed-ups (train one supernet, derive many subnets) but require careful fairness and evaluation strategies to avoid ranking bias between subnets.
Practical tip: start with constrained search spaces (cell-based, channel-level) and cheap proxies (smaller datasets, early stopping) before scaling up.
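To make the evolution-vs-cost trade-off above concrete, here is a minimal sketch of aging (regularized) evolution over a toy cell-based search space. The op names, toy fitness function, and hyperparameters are illustrative assumptions, not taken from any specific paper; a real run would replace `evaluate` with training plus validation.

```python
import random
from collections import deque

# Toy cell-based search space: an architecture is a tuple of op choices,
# one per edge. All names and sizes here are illustrative assumptions.
OPS = ["conv3x3", "conv1x1", "skip", "maxpool"]
NUM_EDGES = 6

def random_arch():
    return tuple(random.choice(OPS) for _ in range(NUM_EDGES))

def mutate(arch):
    # Change the op on one randomly chosen edge.
    edge = random.randrange(NUM_EDGES)
    new = list(arch)
    new[edge] = random.choice(OPS)
    return tuple(new)

def evaluate(arch):
    # Stand-in for expensive training + validation: a cheap deterministic
    # toy fitness so the sketch stays runnable.
    return sum(1.0 for op in arch if op != "skip") + 0.1 * arch.count("conv3x3")

def aging_evolution(cycles=200, population_size=20, sample_size=5, seed=0):
    random.seed(seed)
    population = deque()  # FIFO: the oldest architecture dies first
    history = []
    for _ in range(population_size):
        arch = random_arch()
        population.append((arch, evaluate(arch)))
        history.append(population[-1])
    for _ in range(cycles):
        # Tournament selection: mutate the best of a random sample.
        parent = max(random.sample(list(population), sample_size),
                     key=lambda p: p[1])
        child = mutate(parent[0])
        population.append((child, evaluate(child)))
        population.popleft()  # "aging": remove the oldest, not the worst
        history.append(population[-1])
    return max(history, key=lambda p: p[1])

best_arch, best_fitness = aging_evolution()
```

The FIFO removal (rather than killing the worst) is what distinguishes aging evolution from plain tournament selection: it keeps the population turning over and reduces premature convergence.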
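Putting hardware in the loop can be as simple as scalarizing latency into the search objective, in the spirit of the penalty terms used by ProxylessNAS/FBNet. The per-op latency table, budget, and penalty weight below are made-up numbers for illustration; real flows use device-specific lookup tables or learned latency predictors.

```python
# Illustrative hardware-aware scoring: penalize architectures that exceed
# a latency budget. The latency table and weights are made-up numbers.
OP_LATENCY_MS = {"conv3x3": 1.8, "conv1x1": 0.7, "skip": 0.05, "maxpool": 0.4}

def predicted_latency(arch):
    # Simple additive latency model: a common first approximation;
    # real flows measure per-device or learn a predictor.
    return sum(OP_LATENCY_MS[op] for op in arch)

def hardware_aware_score(accuracy, arch, budget_ms=6.0, penalty=0.05):
    # Scalarized objective: accuracy minus a soft penalty for exceeding
    # the latency budget.
    overshoot = max(0.0, predicted_latency(arch) - budget_ms)
    return accuracy - penalty * overshoot

fast = ("conv1x1", "skip", "conv1x1", "maxpool")
slow = ("conv3x3", "conv3x3", "conv3x3", "conv3x3")
# With equal accuracy, the fast architecture scores higher under the budget.
```

A soft penalty keeps the objective smooth enough for gradient-based methods; a hard constraint (reject anything over budget) is the alternative when the budget is strict.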
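The one-shot/supernet takeaway can be sketched without any deep-learning framework: the supernet stores one shared parameter entry per (edge, candidate op), and every sampled subnet reuses those entries, so training once amortizes across exponentially many subnets. The update rule and numbers below are illustrative assumptions, not a real training loop.

```python
import random

# Minimal weight-sharing sketch: the "supernet" is a table of shared
# parameters keyed by (edge, candidate op). All values are illustrative.
OPS = ["conv3x3", "conv1x1", "skip"]
NUM_EDGES = 4

supernet = {(e, op): 0.0 for e in range(NUM_EDGES) for op in OPS}

def sample_subnet(rng):
    # Uniform single-path sampling: one op per edge per step.
    return [rng.choice(OPS) for _ in range(NUM_EDGES)]

def train_step(subnet, lr=0.1):
    # Stand-in "gradient step": only the sampled ops' shared entries move.
    for e, op in enumerate(subnet):
        supernet[(e, op)] += lr

rng = random.Random(0)
for _ in range(100):
    train_step(sample_subnet(rng))

# Fairness check: with uniform sampling, every candidate op gets updates,
# which is the property one-shot methods must preserve to rank subnets.
touched = sum(1 for v in supernet.values() if v > 0)
```

The fairness concern in the takeaway is exactly about `touched`: if sampling (or a learned gate) favors some ops, under-trained candidates get unfairly low scores when subnets are ranked.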
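The "cheap proxies first" tip can be sketched as a two-stage random search: score every candidate with a cheap, noisy proxy (think a few epochs on a reduced dataset), then spend the full evaluation budget only on a shortlist. Both evaluators here are toy stand-ins, and the noise level is an assumption.

```python
import random

OPS = ["conv3x3", "conv1x1", "skip"]

def true_eval(arch):
    # Expensive ground-truth stand-in: reward non-skip ops.
    return float(sum(op != "skip" for op in arch))

def proxy_eval(arch):
    # Cheap, noisy estimate of true_eval (e.g. early-stopped training
    # on a smaller dataset); noise scale is an illustrative assumption.
    return true_eval(arch) + random.gauss(0.0, 0.5)

def two_stage_random_search(n_candidates=50, keep=5, seed=0):
    random.seed(seed)
    candidates = [tuple(random.choice(OPS) for _ in range(6))
                  for _ in range(n_candidates)]
    # Stage 1: rank everything with the cheap proxy.
    shortlist = sorted(candidates, key=proxy_eval, reverse=True)[:keep]
    # Stage 2: full-budget evaluation only on the shortlist.
    return max(shortlist, key=true_eval)

best = two_stage_random_search()
```

The design question is how well proxy rank correlates with final rank; if the correlation is weak, the shortlist discards good candidates, which is why proxy choice deserves its own ablation before scaling the search up.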