Quantum computing is a new paradigm that exploits the fundamental principles of quantum mechanics, such as superposition and entanglement, to tackle problems in mathematics, chemistry and materials science that are well beyond the reach of supercomputers. Despite the intensive worldwide race to build a useful quantum computer, conservative projections place useful quantum supremacy decades away. The main challenge is that qubits operate at the atomic level and are therefore extremely fragile and difficult to control and read out. The current commercial state of the art implements a few dozen superconducting qubits in a highly specialized technology and cools them down to a few tens of millikelvin. The high cost of cryogenic cooling and bulky wiring prevents its widespread use. A companion classical electronic controller, needed to control and read out the qubits, is mostly realized with room-temperature laboratory instrumentation. Recent progress in integrating the latter on a CMOS chip and placing it at the 4-kelvin stage within the same cryogenic chamber represents a tremendous step toward total system integration, but it still does not begin to address the basic question of how to scale up to the thousands or millions of qubits needed for practical quantum algorithms.
Large Language Models (LLMs) have demonstrated exceptional performance across numerous generative AI applications, but require extremely large models, with parameter counts ranging from several billion to trillions. This leads to significant computational demands for both AI training and inference, and the growth rate of these requirements significantly outpaces advancements in semiconductor process technology. Consequently, innovative IC and system design techniques are essential to address challenges related to computing power, memory, bandwidth, energy consumption, and thermal management to meet AI computing needs.
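To make the scale of these demands concrete, the back-of-envelope sketch below (illustrative only, not part of the talk) estimates the weight-memory footprint and per-token inference compute of dense LLMs, assuming FP16/BF16 weights (2 bytes per parameter) and the common approximation of roughly 2 FLOPs per parameter per generated token:

```python
# Illustrative estimate: how parameter count translates into memory and
# per-token compute for dense decoder-style LLM inference.
def llm_inference_estimate(num_params, bytes_per_param=2):
    """Return (weight memory in GB, per-token compute in TFLOPs).

    Assumes FP16/BF16 weights (2 bytes each) and ~2 FLOPs per parameter
    per generated token; activations and KV cache are ignored.
    """
    weight_gb = num_params * bytes_per_param / 1e9
    tflops_per_token = 2 * num_params / 1e12
    return weight_gb, tflops_per_token

for name, n in [("7B", 7e9), ("70B", 70e9), ("1T", 1e12)]:
    gb, tflops = llm_inference_estimate(n)
    print(f"{name}: ~{gb:.0f} GB of weights, ~{tflops:.2f} TFLOPs per generated token")
```

Even this simplified estimate shows why trillion-parameter models exceed the memory and compute budget of any single chip, motivating the architectural strategies discussed in the talk.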
In this talk, we will explore the evolution of LLMs in the generative AI era and their influence on AI computing design trends. For AI computing in data centers, both scale-up and scale-out strategies are employed to deliver the huge computational power required by LLMs. Conversely, even the smaller LLMs intended for edge devices demand more resources than previous, pre-LLM generations of models. Moreover, edge devices may also act as orchestrators in device-cloud collaboration. These emerging trends will significantly shape the design of future computing architectures and influence the advancement of circuit and system designs.
The evolution of satellite communication systems is increasingly driven by the need for high-performance, flexible, and efficient solutions, particularly in the design and implementation of circuits and systems for next-generation satellite payloads. Software-defined payloads leveraging On-Board Digital Signal Processing (OBP), Multi-Beam Antennas (MBAs) and Beam-Forming Networks (BFNs) are pivotal technologies in addressing the challenges posed by emerging communication standards, such as 5G/6G, as well as advanced satellite communication and remote sensing systems.
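As an illustrative aside (not drawn from the abstract), the sketch below shows the basic digital beam-forming principle that an on-board BFN implements: per-element complex phase weights steer a uniform linear array toward a desired angle. The array geometry, function names and parameters are assumptions chosen for clarity.

```python
import numpy as np

def bfn_steering_weights(num_elements, steer_deg, spacing_wavelengths=0.5):
    """Complex per-element weights steering a uniform linear array to steer_deg."""
    n = np.arange(num_elements)
    phase = -2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(steer_deg))
    return np.exp(phase) / num_elements  # normalized: unity gain in the steered direction

def array_response(weights, angle_deg, spacing_wavelengths=0.5):
    """Array factor (complex gain) of the weighted array toward angle_deg."""
    n = np.arange(len(weights))
    steering = np.exp(2j * np.pi * spacing_wavelengths * n * np.sin(np.radians(angle_deg)))
    return np.dot(weights, steering)

w = bfn_steering_weights(num_elements=16, steer_deg=20.0)
print(abs(array_response(w, 20.0)))   # ~1.0: full gain in the steered direction
print(abs(array_response(w, -40.0)))  # much smaller: energy suppressed off-beam
```

In a software-defined payload, recomputing such weight sets on board is what allows beams to be reshaped and re-pointed in orbit without hardware changes.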