
TAIMTAQ: A High-Performance Transformer Accelerator Chip for Edge Computing


Since Google introduced the Transformer architecture, research teams have quickly discovered its remarkable versatility, and Transformer models have since driven major breakthroughs in natural language processing and computer vision. Popular models such as ChatGPT, the GPT series, Stable Diffusion, and ViT are all built on a Transformer backbone.

All of these milestones, however, require massive capital investment, especially for language models, which keep growing in size to gain capability. Major research organizations such as OpenAI, Google, and Meta are already preparing for trillion-parameter LLMs. One of the most difficult issues for research and commercial application is the construction of large computing centers: advanced GPUs are very expensive, building a cloud center with ten thousand nodes would cost more than $100 million, and that is before accounting for constant energy consumption. More hardware development is needed to meet this demand.
Motivated by this vision and the current conundrum, our team spent many years studying the Transformer architecture. Applying our expertise in hardware design, we developed a much more fine-grained matrix operation along with specialized computation hardware, and integrated them into a general accelerator architecture, TAIMTAQ. Hardware acceleration has been a very active research field: a research team at Seoul National University proposed the A3 architecture in 2020, while the Massachusetts Institute of Technology introduced SpAtten in 2021. After sustained effort at improvement, our TAIMTAQ architecture has surpassed these international baselines in compute performance, chip area, and power consumption.
TAIMTAQ can support generative language models, object detection models, and image segmentation models, as long as they are built on the Transformer. Strong and economical edge AI chips such as TAIMTAQ are key components of democratizing AI.

National Yang Ming Chiao Tung University

National Yang Ming Chiao Tung University (NYCU) was formed in 2021 through the merger of National Yang Ming University and National Chiao Tung University. Located in Hsinchu, Taiwan, NYCU is a leading institution specializing in technology, engineering, medicine, and social sciences. The university is known for its strengths in research and innovation, particularly in areas such as information technology, biomedicine, and artificial intelligence. NYCU fosters interdisciplinary collaboration, global partnerships, and aims to nurture professionals with strong academic foundations and leadership skills to address societal challenges and contribute to technological advancements.

Contact


  • Address: No. 75, Boai Street, Hsinchu 300, Taiwan, ROC

Other Information

  • Pavilion: Future Tech AIoT & Smart Applications, FB02

  • Affiliated Ministry: National Science and Technology Council

  • Application Field: Information & Communications, Machinery & System, Life Application

  • Technology maturity: Experiment stage

  • Exhibiting purpose: Display of scientific results

  • Trading preferences: Negotiate by self
