A Theoretical and Empirical Investigation of High-Dimensional Feature Space Representations in Deep Learning Models for Large-Scale Data Processing
Keywords:
High-dimensional space, deep learning, representation learning, overparameterization, generalization, statistical learning theory

Abstract
High-dimensional feature spaces are intrinsic to modern deep learning architectures, yet their theoretical properties and practical implications remain only partially understood. This study investigates the role and behavior of high-dimensional representations in deep neural networks (DNNs) under large-scale data regimes. We combine theoretical frameworks from statistical learning theory with empirical evaluation on benchmark datasets to examine how deep models cope with the curse of dimensionality and exploit its blessings. Our findings provide new insights into feature disentanglement, sparsity, generalization, and representation learning, and contribute to explaining why deep learning generalizes well even in highly overparameterized regimes.
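To make the notion of high-dimensional representations concrete, the sketch below estimates the effective dimensionality of a layer's feature matrix via the participation ratio of its covariance eigenvalues. This is a minimal illustration only, not the study's actual protocol: the synthetic feature matrix, its dimensions, and the `participation_ratio` helper are assumptions introduced here for exposition; in practice the features would be activations extracted from a trained DNN on a benchmark dataset.

```python
# Illustrative sketch (assumed, not the paper's method): effective dimensionality
# of a feature matrix via the participation ratio PR = (sum λ_i)^2 / sum λ_i^2
# of its covariance eigenvalues.
import numpy as np

def participation_ratio(features: np.ndarray) -> float:
    """Return the participation ratio of the feature covariance spectrum."""
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / (features.shape[0] - 1)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against small negative values
    return float(eigvals.sum() ** 2 / (np.square(eigvals).sum() + 1e-12))

# Synthetic stand-in: 1024-dimensional features that actually lie near a 20-D subspace.
rng = np.random.default_rng(0)
low_rank = rng.normal(size=(5000, 20)) @ rng.normal(size=(20, 1024))
features = low_rank + 0.01 * rng.normal(size=low_rank.shape)
print(f"ambient dim = {features.shape[1]}, effective dim ~ {participation_ratio(features):.1f}")
```

The participation ratio is only one of several intrinsic-dimensionality estimators; a PCA explained-variance threshold or similar measure would serve the same illustrative purpose.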