Abstract
The rapid growth in data-centric applications—such as video analytics, IoT, and real-time communications—demands hardware solutions capable of processing vast amounts of data with minimal delay. High-throughput Very Large Scale Integration (VLSI) architectures have emerged as crucial enablers in fulfilling real-time streaming requirements. This paper explores the architectural techniques and design methodologies employed to achieve high throughput and low latency in VLSI systems. It evaluates pipelining, parallelism, and hybrid approaches in terms of performance, power efficiency, and silicon cost. A performance comparison chart illustrates the trade-offs among key architectural paradigms.
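To make the trade-off concrete, the short sketch below gives a back-of-the-envelope model of throughput, latency, and area for a pipelined versus a replicated (parallel) datapath. The clock period, register overhead, and area figures are assumed placeholder values for illustration only, not results from this paper.

```python
# Illustrative model (assumed numbers, not paper results): a combinational
# datapath with critical path T_COMB_NS can be pipelined into N stages
# (shorter clock period, deeper latency in cycles) or replicated P times
# (same clock, P results per cycle, roughly P x area).

T_COMB_NS = 10.0       # assumed critical path of the unpipelined datapath (ns)
REG_OVERHEAD_NS = 0.2  # assumed register setup / clk-to-q overhead per stage (ns)
BASE_AREA = 1.0        # normalized area of the unpipelined datapath

def pipelined(n_stages: int):
    """Throughput/latency/area estimate for an n-stage pipeline."""
    t_clk = T_COMB_NS / n_stages + REG_OVERHEAD_NS  # shorter path per stage
    throughput = 1.0 / t_clk                        # one result per cycle (results/ns)
    latency = n_stages * t_clk                      # pipeline fill latency (ns)
    area = BASE_AREA + 0.05 * n_stages              # assumed pipeline-register overhead
    return throughput, latency, area

def parallel(replicas: int):
    """Throughput/latency/area estimate for 'replicas' copies of the datapath."""
    t_clk = T_COMB_NS + REG_OVERHEAD_NS
    throughput = replicas / t_clk                   # replicas results per cycle
    latency = t_clk                                 # single-cycle latency preserved
    area = BASE_AREA * replicas                     # area scales roughly linearly
    return throughput, latency, area

for label, (thr, lat, area) in [("4-stage pipeline", pipelined(4)),
                                ("4-way parallel",   parallel(4))]:
    print(f"{label}: {thr:.2f} results/ns, {lat:.1f} ns latency, {area:.2f}x area")
```

Under these assumed numbers both options raise throughput by roughly 4x, but pipelining does so at a small register-area cost while lengthening latency, whereas replication preserves latency at roughly linear area and power cost; hybrid designs mix the two to balance these effects.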
All articles published in the American Journal of Embedded Systems and VLSI Design (AJESVD) are licensed under the Creative Commons Attribution 4.0 International License (CC BY 4.0). This license allows:
- Sharing – copying and redistributing the material in any medium or format
- Adaptation – remixing, transforming, and building upon the material for any purpose, even commercially
Under the following terms:
- Attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
By submitting a manuscript to AJESVD, authors agree to make their work freely available under the terms of the CC BY 4.0 license and affirm that their submission is original and does not infringe on any third-party rights.