Scalable Computational Frameworks for High-Dimensional and Streaming Data Processing Using Distributed Machine Learning and Edge Computing Architectures

Authors

  • S. E. Melnikova, Robotics Scientist, Russia

Keywords

Scalable computing, high-dimensional data, streaming data, distributed machine learning, edge computing, federated learning, parallel processing

Abstract

The proliferation of high-dimensional and streaming data has necessitated scalable computational frameworks that leverage distributed machine learning (DML) and edge computing. Traditional centralized systems struggle to meet real-time processing requirements, creating bottlenecks in data-intensive applications. This paper reviews state-of-the-art scalable computational techniques, emphasizing distributed architectures, parallel processing, federated learning, and edge computing. It discusses models and methodologies for maintaining computational efficiency while reducing latency and resource consumption, and provides a comparative analysis of frameworks used for large-scale data processing. Finally, future directions for optimizing high-dimensional and streaming data processing are explored.
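To make the federated learning idea mentioned in the abstract concrete, the following is a minimal sketch of federated averaging (in the spirit of FedAvg): each edge client runs a few local gradient steps on its own data, and the server aggregates the resulting models weighted by client sample counts. The linear model, toy data, and all function names here are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local gradient-descent pass on a linear model (toy example)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    """Server step: average local models, weighted by each client's sample count."""
    total = sum(len(y) for _, y in clients)
    local_models = [local_update(global_w, X, y) * (len(y) / total)
                    for X, y in clients]
    return np.sum(local_models, axis=0)

# Three simulated edge clients, each holding its own private data shard.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(20):  # communication rounds between server and clients
    w = federated_average(w, clients)
print(np.round(w, 2))  # converges toward true_w = [2, -1]
```

Only model parameters cross the network in each round, never raw data, which is the property that makes this family of methods attractive for the latency- and privacy-constrained edge settings surveyed here.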


Published

2025-02-21