Every time a Taiwanese user asks an AI chatbot what to eat for breakfast and receives the answer "hamburger," a small but telling distortion occurs. The AI is not malfunctioning — it is simply reflecting the cultural assumptions embedded in the models that power it.
That observation, offered by Chien Lee-feng (簡立峰), former Managing Director of Google Taiwan and current independent board director at AI startup Appier, cuts to the heart of a structural debate over AI sovereignty. Speaking at a press briefing on Wednesday, March 25, Chien argued that Taiwan must treat AI infrastructure as a matter of national resilience, not merely technological convenience.
(Related: Taiwan Is Running Out of People. A Former Google Taiwan Chief Says It Must Go Global)
How AI Models Shape — and Distort — Local Decision-Making
Of the roughly 7,000 languages spoken worldwide, AI systems have been meaningfully optimized for only a handful, Chien noted. More than 6,000 languages remain inadequately served.
The problem, however, goes beyond language coverage. Even when a user queries an AI in Traditional Chinese, the system converts that input into a token sequence — a symbolic representation — before running its computations on a model built primarily around English-language data. The result, analysts argue, is that the answer a Taiwanese user receives may be technically in Chinese but culturally foreign.
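The tokenization step described above can be illustrated with a minimal sketch. The snippet below shows a hypothetical byte-level view, assuming UTF-8 encoding; it is not any specific model's tokenizer, but it approximates the first stage many tokenizers operate on. Each Traditional Chinese character occupies three bytes, so a byte-oriented tokenizer fragments it into several units before the model ever computes an answer.

```python
# Minimal sketch: how a byte-level view fragments Traditional Chinese
# versus English. Many LLM tokenizers start from UTF-8 bytes; this is
# an illustration, NOT any specific model's vocabulary.

def byte_tokens(text: str) -> list[int]:
    """Return the raw UTF-8 byte sequence a byte-level tokenizer would see."""
    return list(text.encode("utf-8"))

zh = "早餐"        # "breakfast" in Traditional Chinese: 2 characters
en = "breakfast"   # one English word

print(len(zh), len(byte_tokens(zh)))  # 2 characters -> 6 byte-level units
print(len(en), len(byte_tokens(en)))  # 9 characters -> 9 byte-level units
```

In practice, byte-pair encoding then merges frequent byte sequences into larger tokens. Because English dominates the training corpora of mainstream models, English words tend to merge into fewer tokens than Chinese text of comparable meaning, which is one concrete way a model's origin leaks into the representation a Taiwanese user's query passes through.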
Chien illustrated this with a pointed example: ask a mainstream AI chatbot "When is National Day?" and it may return October 1 — the date of the People's Republic of China's National Day — rather than October 10, Taiwan's National Day. The correct query, he argued, is "When is the Republic of China National Day?" Users who do not know to frame questions this way risk receiving systematically skewed answers.
"You think you're asking in Traditional Chinese," Chien said, "but the model's origin determines the answer. You forget there are other options."