To keep Taiwan's industry, government, academia, and research institutions ahead of global technology and industry trends amid this wave of transformation, the Taiwan Science and Technology Hub (Taiwan S&T Hub) invited internationally renowned AI expert Dr. Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI), creator of ImageNet, and co-founder and chairperson of AI4ALL, to participate in the "What we see and what we value: AI with a human perspective" roundtable forum held on March 23. Together with local experts, she discussed how to make AI a key driving force for the betterment of humanity. The forum was moderated by Erica Lu, a social affairs consultant at Business Next, and featured guests including Pegatron Corporation Chairman Tung Tzu-Hsien, PChome CEO Alice Chang, and iKala co-founder and CEO Sega Cheng, among other industry experts.
The following article is based on the insights shared by iKala co-founder and CEO Sega Cheng during the roundtable discussion (Part 2).
In the audience, we have a high school student who would like to ask: Nowadays, we encourage everyone to learn about programming and software, and even to write code. In the age of AI, to what extent should we learn AI? Should we focus on learning AI tools, understanding the underlying principles, or actually developing a model?
I have an eight-year-old daughter myself, so I have been considering whether children should learn programming languages at an early age. Let me first share my thoughts on programming languages. I believe that there is no need to invest too early in learning them, and we don't need to turn it into a nationwide movement.
We should look at the issue of children learning programming languages from two perspectives. The first is the perspective of AI itself. One research direction focuses on redesigning computer systems. Computer systems have been built up through layers of abstraction, primarily for human convenience; as a result, many intermediate languages emerged, followed by programming languages and natural languages. AI, however, does not need these intermediate layers. So my expectation is that future research on programming languages will involve AI designing highly efficient languages, potentially entirely new ones. Programming languages will evolve faster than before, and the choices we make now will be completely different from those we make ten years from now. AI will likely accelerate the iteration of programming languages, producing the most efficient programming languages in human history. It won't be too late to start learning them then.
The second perspective is that our interaction with AI has already reached the level of natural language. I often hear language educators say, "Oh no! Language institutions won't be needed anymore since AI can be a teacher," but I disagree. Natural language skills will only become more critical. Not only do you need to communicate with ChatGPT and design good prompts, but you also need to be able to guide it toward correct answers and ask the right questions. This skill is crucial for communication not only with humans but also with AI. AI contains vast knowledge waiting to be accessed, but you have to ask the right questions to turn that knowledge into wisdom. So I think natural language is even more important. When it comes to education, I believe natural language matters more than programming languages. For humans, language is not just a skill; learning an additional language means acquiring a new way of thinking, which is something AI cannot replace. A person who speaks multiple languages can therefore think more critically, hold more diverse perspectives, and combine ideas in more innovative ways. Innovation is essentially the connecting of distant ideas, and humans have this ability because of our language skills. The more languages we learn, the more diverse our thinking frameworks and abilities become.
Returning to the topic of education, should programming languages be included in the curriculum? I think they might eventually become a core subject like English, but one should never learn programming languages just for the sake of it. Moreover, AI may well continue to refine the design of programming languages and computer systems. Whatever the case, we should embrace lifelong learning, because everything evolves every day. If you are a middle school, high school, or college student, I believe the most important thing is to follow a pyramid structure: the base is "self-confidence," the middle is "self-management," and the top is "self-learning." These will remain unchanged regardless of the AI era.
How do you view the challenges of limited data faced by startups during their early stages, and what strategies do you suggest for obtaining more data?
First and foremost, AI is already an open community, with numerous pre-trained base models and open datasets available. One strategy is to stand on the shoulders of giants and use these well-trained base models, rather than attempting to train a new model from scratch, which could cost millions of dollars. By fine-tuning a base model, you can add your own data and create a more specific model, much like teaching a child's brain a particular skill. For example, in our influencer search application, we feed influencer data into the AI "brain," which can then answer questions such as "Who is the best influencer in Japan for promoting ramen?" With GPT, it can provide a direct response. Today, every enterprise can start training its own AI "brain" at a much lower cost.
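The fine-tuning strategy Cheng describes, keeping a large pre-trained base frozen and training only a small task-specific layer on your own limited data, can be sketched in a few lines. The toy example below is illustrative only: the "base model" here is a fixed random projection standing in for a real pre-trained network, and all names (`base_features`, `train_head`) are hypothetical.

```python
import random
import math

random.seed(0)

# Stand-in for a frozen, pre-trained base model: a fixed random projection
# with a ReLU. In practice this would be a large open model whose weights
# are downloaded once and never updated.
DIM_IN, DIM_FEAT = 4, 8
BASE_W = [[random.gauss(0, 1) for _ in range((DIM_IN))] for _ in range(DIM_FEAT)]

def base_features(x):
    """Frozen 'base': maps a raw input to features (weights never change)."""
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in BASE_W]

def train_head(data, labels, epochs=300, lr=0.5):
    """Fine-tuning step: fit only a small logistic head on frozen features."""
    head = [0.0] * DIM_FEAT
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            f = base_features(x)
            z = sum(h * fi for h, fi in zip(head, f)) + bias
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - y                        # gradient of the logistic loss
            head = [h - lr * g * fi for h, fi in zip(head, f)]
            bias -= lr * g
    return head, bias

def predict(head, bias, x):
    z = sum(h * fi for h, fi in zip(head, base_features(x))) + bias
    return 1 if z > 0 else 0

# A tiny "domain" dataset: the point is that very little data is needed,
# because the frozen base already supplies useful features.
data   = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
labels = [1, 1, 0, 0]

head, bias = train_head(data, labels)
print([predict(head, bias, x) for x in data])
```

The design point is the asymmetry: only the small head (a handful of parameters) is trained, which is why adapting a base model to a vertical domain is so much cheaper than pre-training from scratch.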
As a result, we have transitioned from digital archives to intelligent archives. Previously, the data from vertical industries was the most valuable and was held by a small number of people. By using GPT models to store this data, it becomes an in-house expert within your enterprise or research project. We do see a scarcity of vertical data and recognize its value: while AI technology may become widely accessible, data is ultimately the most crucial asset. We also see research moving toward small data. By comparison, the human brain is incredibly efficient, consuming only 20-25 watts; the energy in a single hamburger is enough to power it for a whole day, whereas running a GPT model for an entire day can cost tens of thousands or even millions of dollars. The human brain can also learn impressive things from very little data, an area in which AI still falls short. Thus, there is no need to worry.
How should we contemplate the future AI ecosystem?
The AI research community's openness has played a crucial role in the rapid development of AI research over the past few years. From Google's Transformer in 2017 to Google's BERT in 2018 and Meta's RoBERTa in 2019, followed by Stanford's work on foundation models, the breakthroughs in AI are the result of continuous improvement by the community. However, with the rise of ChatGPT, there are concerns that this openness may be reversed. As AI's influence grows to the point of becoming a trade secret, leading AI companies will start protecting their intellectual property. From a pessimistic perspective, this may slow the pace of AI development, especially for research coming from the big tech companies. The ecosystem may not continue to develop at the same rapid pace as the past decade, with everything being published, but it will continue to progress because academic research is still pushing forward. Data will still be a big issue. From the industry perspective, AI will become like water and electricity, and the focus will shift to adding value to existing business models by finding the right application "fields."
Taiwan's advantage lies in its hardware manufacturing, which is globally renowned, whether in chips or the entire hardware supply chain. These are irreplaceable assets, and the computing power required for AI is still in short supply, with only the largest companies able to access the latest hardware; these shortages will continue for the next few years. We can therefore expect Taiwan's semiconductor industry to continue to flourish. In the software industry, on the other hand, it is crucial to think globally from day one and treat Taiwan like Israel or Singapore, because the software industry seeks scale. Taiwan's population of 23 million is relatively small, so it is not enough to focus on Taiwan alone; other markets must be considered as well. Whether in AI or other software, from day one we must look out from Taiwan to the world.
As a parent, what kind of AI world would Sega like his children to live in, and what can we do now?
Looking at a longer timeline, perhaps 50, 100, or 200 years, humanity's development of digital technology may be just a transitional phase. In 100 years, people may not even talk about digital technology or AI anymore, because these technologies will have matured. By then, AI will assist in crucial areas such as gene editing, protein research, new drug development, longevity, and space exploration. That is why I think this period is so important: technology is always neutral, and human choices about how to use it will determine its impact. This is also a critical moment, when AI's influence has reached every corner of society. We must make choices about the areas where we really cannot allow AI, the areas we should keep open, and the areas where it should be used only with limitations. These decisions will pose significant challenges in the future.
Every time human society advances, polarization between people increases, and as progress continues to widen the gap, the most dangerous question becomes why technological progress shakes society as a whole. The problem lies not in a lack of improvement in human living standards, but in the increase in inequality. Although people today are better off than those 100 years ago, they are not happier or more content, and that is because of inequality. If technology continues to widen the gap, society will collapse. Therefore, I believe this is a very significant problem that the AI era must address.
Thus, I think we must look at AI from a societal perspective, not just a technical standpoint. As we hurtle toward progress, it is crucial to ensure that everyone can board the train and obtain a ticket. This is also a lesson from the Industrial Revolution: reskilling and upskilling are difficult, but necessary to absorb the impact of an entire technological revolution. Therefore, we need to look at AI from a humanistic perspective, not just an AI perspective.