TAIMTAQ: A High-Performance Transformer Accelerator Chip for Edge Computing
Since Google introduced the Transformer architecture, research teams have quickly discovered its remarkable versatility, and Transformer models have driven major breakthroughs in natural language processing and computer vision. Popular models such as ChatGPT, the GPT series, Stable Diffusion, and ViT are all built on a Transformer backbone.
These milestones, however, require massive capital investment, especially for language models, which keep growing in size to gain capability. Major research organizations such as OpenAI, Google, and Meta are already preparing for trillion-parameter LLMs. One of the most difficult issues for research and commercial deployment is the construction of large compute centers: advanced GPUs are very expensive, a cloud center with ten thousand nodes would cost more than $100 million to build, and that is before counting its constant energy consumption. More hardware development is needed to meet this demand.
Motivated by this vision and the current conundrum, our team spent many years studying the Transformer architecture. Applying our expertise in hardware design, we developed a much more fine-grained matrix operation along with specialized computation units, and integrated them into a general accelerator architecture, TAIMTAQ. Hardware acceleration has been a very active research field: a team at Seoul National University proposed the A3 architecture in 2020, and the Massachusetts Institute of Technology introduced SpAtten in 2021. After our sustained effort at improvement, the TAIMTAQ architecture has surpassed these international baselines in compute throughput, chip area, and power consumption.
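The accelerators mentioned above (A3, SpAtten, and TAIMTAQ itself) all target the same core workload: the scaled dot-product attention at the heart of every Transformer. As a point of reference only (not from the source, and not a description of TAIMTAQ's internals), the matrix operation being accelerated can be sketched in a few lines of numpy:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n, n) attention scores
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                            # (n, d_v) weighted values

# Toy example: 4 tokens with 8-dimensional query/key/value vectors
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The two dense matrix multiplications and the softmax in between dominate the cost as sequence length grows, which is why attention accelerators focus on exactly this kernel.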
TAIMTAQ can support generative language models, object detection models, and image segmentation models, as long as they are built on the Transformer. Strong, economical edge AI chips such as TAIMTAQ are key components of democratizing AI.
National Yang Ming Chiao Tung University (NYCU) was formed in 2021 through the merger of National Yang Ming University and National Chiao Tung University. Located in Hsinchu, Taiwan, NYCU is a leading institution specializing in technology, engineering, medicine, and social sciences. The university is known for its strengths in research and innovation, particularly in areas such as information technology, biomedicine, and artificial intelligence. NYCU fosters interdisciplinary collaboration, global partnerships, and aims to nurture professionals with strong academic foundations and leadership skills to address societal challenges and contribute to technological advancements.
Technology maturity: Experiment stage
Exhibiting purpose: Display of scientific results
Trading preferences: Negotiate by self