CEO Insights

Exploring the AI Trends with Meta's Groundbreaking Announcement of LLaMA 2

The First Major News in the AI Community for the Second Half of the Year: Meta Releases LLaMA 2, Fully Open Source and Free for Commercial Use.

In the AI Mid-Year Summary article shared last month, I mentioned that Meta had already achieved a significant leadership position in the AI open-source community with LLaMA. Now, Meta is leveraging its success and making a major move by launching LLaMA 2 today (July 19th) to further solidify its position.

Compared to the first generation of LLaMA, LLaMA 2 was trained on 2 trillion tokens (a measure of the amount of text) and offers a context length of 4,096 tokens, double that of the previous generation.

Recently, the AI research community has been engaging in a competition regarding text length, focusing on the number of tokens that language models can process at once. This aims to enhance the language model's ability to demonstrate comprehensive contextual reasoning, leading to better AI performance.

OpenAI's ChatGPT, Anthropic's Claude, and some recently published research claim context lengths of 32,768 tokens, 100,000 tokens, and 1 million tokens, respectively. While LLaMA 2 may seem short in comparison, it is crucial to note that LLaMA 2 is the only one among them that is open source.
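To make the context-length race concrete: a document longer than a model's window has to be split into chunks before it can be processed at all. Below is a minimal sketch of overlapping chunking, using a whitespace word count as a stand-in for a real tokenizer (an assumption; actual BPE token counts differ):

```python
def approx_tokens(text: str) -> int:
    # Rough proxy: real tokenizers (BPE/SentencePiece) differ, but a word
    # count is close enough to illustrate context-window budgeting.
    return len(text.split())

def chunk_for_context(words: list[str], window: int = 4096, overlap: int = 256) -> list[str]:
    """Split a long document into overlapping chunks that each fit the window."""
    step = window - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + window]))
        if start + window >= len(words):
            break
    return chunks

doc = ["token"] * 10000          # a document of ~10k tokens
chunks = chunk_for_context(doc)  # budgeted for LLaMA 2's 4,096-token window
print(len(chunks))               # 3 chunks, each at most 4,096 tokens
```

A model with a 100K-token window would take the same document in a single pass, which is why the race matters for long-document reasoning.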

In the announcement of LLaMA 2, Meta mentioned that its capabilities are still not as advanced as GPT-4. However, this precisely highlights a major challenge for OpenAI: In the future, every enterprise will need its own AI brain as an operational center, but this AI brain doesn't necessarily have to be as powerful or expensive as GPT-4.

What enterprises need is a customized AI model that can address their specific business challenges and "possess domain expertise". They don't require an all-knowing AI.

The AI arms race centered on "downsizing the AI brain" (as mentioned in my previous article) began in the first half of the year, and the trend has been accelerating ever since. LLaMA 2 marks a new milestone: it is not only fully open source but also comes with a significant partnership announced by Meta.

Qualcomm and Meta have collaborated to integrate LLaMA 2 into smartphone chips, which will become a reality in 2024. This indicates that Meta has gained a first-mover advantage in the AI edge computing market. Currently, other big tech companies do not have a comparable open-source model to compete with LLaMA 2.

Let's not forget that Google once achieved dominance in the mobile operating system (OS) market by leveraging the open-source Android. Meta, having missed the mobile platform opportunity, has had to rely on the ecosystems of Apple and Google, constantly adjusting to their privacy policies and the constraints on its advertising business; the competition among these three companies has never subsided.

This year, Zuckerberg decisively shifted Meta's focus from the metaverse to fully embrace AI. With (the accidental leak of) the first-generation LLaMA, Meta has opened up a new competitive landscape with the opportunity to explore deeper into everyone's smartphones.

Let's not forget what we have always emphasized: in the digital business domain, it's all about an ecosystem battle. Meta integrates three crucial weapons: AI chips, open-source AI models, and its existing powerful network effects. Amidst the clash between the two giants, Google and OpenAI/Microsoft, Meta has suddenly entered a new AI battlefield, aiming to start from community network applications and vertically delve into everyone's smartphone computing chips.

At this point, events have proven what I said before: anyone claiming that Meta is absent from or falling behind in the AI war is completely misjudging the situation.

Meta is not playing catch-up; instead, it has entered the AI battle from a completely different competitive angle. Many still have doubts about Zuckerberg's metaverse, but he is genuinely impressive. I have always advocated that AI's development will accelerate the growth of the metaverse. Looking back in a few years, we will realize that Zuckerberg was just taking a temporary detour.

According to rumors, Meta is conducting internal tests to deploy LLM (Large Language Model) on Messenger at a large scale. As the world's largest messaging platform, there is no better place for creating a large number of popular digital humans than Meta's portfolio. I'm quite certain that Meta will swiftly enter this market.

Therefore, emerging generative AI companies, such as digital human startups, have been under pressure lately. After all, once the tech giants catch up and assert their dominance, these companies could be heavily impacted.

The network effect remains the most advantageous moat controlled by big tech, and the second half of the year will be their home field. Startups relying solely on generative AI technology for entrepreneurship will face immense competitive pressure without a clear moat.

With the groundbreaking release of LLaMA 2, the industry has witnessed a bloom of open-source models being utilized to build various AI applications. Companies that have cautiously guarded their "exclusive large language models" as trade secrets and primary competitive advantages must swiftly construct an entire AI ecosystem and become new gateways to the internet. Otherwise, they risk being overshadowed and potentially consumed by the existing ecosystem of big tech companies.

Building a new ecosystem is not simple, and these enterprises will face a critical moment of survival in the second half of the year. The rapid pace of market change is truly remarkable.

CEO Insights

11 Key Highlights of Global AI Technology Developments in the First Half of 2023

In the first half of 2023, AI technology took a significant leap forward. This article shares 11 key insights drawn from our practical experience with AI, primarily based on iKala's AI-powered products, including the influencer marketing platform KOL Radar, which holds data on over 1 million influencers worldwide, and our customer data platform, iKala CDP. The insights also draw on our R&D efforts in helping clients implement AI solutions. iKala remains committed to sharing our achievements through platforms such as Hugging Face and public research papers.

1. Does downsizing the AI model (AI brain) still retain the same level of intelligence?

Achieving this is challenging, as the number of parameters in large language models (LLMs) remains a decisive factor in their capabilities. We tested over 40 models, most below 30B parameters, and found that reducing the size of the AI brain hinders its ability to maintain the same level of comprehension. The paper "The False Promise of Imitating Proprietary LLMs" summarizes this finding. However, who wouldn't want a model that is small, performs well, and runs fast? The research community therefore continues to invest significant effort in downsizing models. It is worth noting that an AI's ability to write programs seems to have no inherent correlation with model size, which diverges from trends observed in other tasks requiring human cognition. Currently, we are unable to explain this phenomenon.

2. The Competitive Race of Exaggerated Claims in Open-Source Language Models Across Various Sectors

Continuing from the previous point, numerous public models have been released in the past six months. Many companies and developers claim that with just a few hundred (or thousand) dollars, a small amount of training data, and fast training, they can reach, say, 87% of GPT-4's performance. However, these seemingly remarkable results should be treated as reference points only; it is crucial to deploy and reproduce the experiments yourself to validate them.
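Reproducing such claims need not be elaborate: even a minimal evaluation harness that scores a model's answers against references will expose inflated numbers. The sketch below uses normalized exact match on hypothetical question-answer pairs (real evaluations would use held-out benchmarks and stronger metrics):

```python
def exact_match_score(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions matching the reference after case/whitespace normalization."""
    norm = lambda s: " ".join(s.lower().split())
    hits = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return hits / len(references)

# Hypothetical outputs from the model under test vs. reference answers.
preds = ["Paris", "4", "blue whale", "1945"]
refs  = ["paris", "4", "Blue Whale", "1944"]
print(exact_match_score(preds, refs))  # 0.75
```

Running the same harness over the published model and its claimed baseline on identical inputs is the quickest way to check whether "87% of GPT-4" survives contact with your own data.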

3. The trend of AI privatization and customization has already emerged

Customers who have adopted AI at an accelerated pace have widely acknowledged the difficulty of replicating large-scale models such as GPT-4, Claude, Midjourney, PaLM 2, and others. It is also unnecessary to allocate significant resources solely for the purpose of integrating these large models into their own businesses. For the majority of enterprises, a "general-purpose LLM" is not necessary. Instead, they require language models with capabilities tailored to their specific business models.

4. Weaken the overall model or remove specific capabilities

Due to the low feasibility of replicating large models and the urgency for enterprises to adopt AI, the current direction is to train "industry-specific models." The approach involves "directly removing specific capabilities of the language model (e.g., only being able to listen, speak, and read but not write)" or "weakening the entire model (e.g., compromising proficiency in listening, speaking, reading, and writing simultaneously)." This aspect can be referred to in the paper "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes."

5. Dual-track thinking in enterprise AI implementation

Business owners are currently contemplating how to apply AI to their existing business models and enhance the productivity of their workforce. Organizations are actively breaking down internal workflows, streamlining processes, and automating tasks to improve individual productivity. Simultaneously, they are exploring the potential added value of AI in their business models. Many of our customer companies start by establishing a "data hub" with iKala CDP, accelerating data organization while exploring AI opportunities. Data is essential for AI, and since AI will undoubtedly play a significant role in the future, it is crucial to address important tasks while navigating the exploration phase. As a result, we are witnessing rapid market development in Big Data and Cloud, driven by AI. Unlike past digital transformations, which often had unclear objectives, the current focus is on achieving "intelligence" with well-defined goals that demonstrate the effectiveness of AI. Consequently, business owners are increasingly investing in AI.

6. The trend of platformization for large-scale models has emerged

Continuing from the discussion of large-scale models and referring back to the concept of economies of scale, only the Tech Giants can afford the training and operational costs of these models. By reducing unit service costs, they create barriers for smaller competitors. Consequently, these models are moving towards platformization. They serve as the foundation for private models, allowing external companies and developers to access the outcomes generated by these large models at a low cost. However, access to the details is limited. Currently, there are insufficient incentives for Tech Giants to disclose the specifics of these large models, with Meta being an exception as mentioned in the next point. Unless governments step in to regulate, this path remains long and challenging. Ultimately, policymakers may seek a balance between commercial interests and national (or regional) governance in their relationship with Tech Giants. Overall, Tech Giants are unlikely to face significant harm.

7. Meta has already taken the lead in the open-source AI community

With the widespread popularity of LLaMA and SAM (Segment Anything Model), Meta has regained a significant level of influence in the AI landscape. However, what sets Meta apart is that the company does not rely directly on AI for revenue generation. While Google, Amazon, and Microsoft offer AI services on their cloud platforms for enterprise leasing, and OpenAI sells subscriptions for ChatGPT, Meta continues to generate substantial advertising revenue through its immensely powerful community platform and network effects. Therefore, Meta's commitment to openness in AI undoubtedly surpasses that of other Tech Giants.

8. The development of AI models is predominantly led by the English language system

The majority of (open-source) models clearly perform best in English, which widens the technological development gap between Western countries and the rest of the world. It also influences the future usage habits of global users and may even jeopardize the prevalence of certain languages. As a result, governments and large private enterprises in various countries have started developing their own large-scale models to counter this "linguistic hegemony." However, even with significant resources invested in training country-specific models, the key factor remains the demand for their usage. When training AI models reaches a national level, it becomes a matter of marketing and service rather than technical challenges. Therefore, relevant entities in each country should focus their efforts accordingly.

9. The majority of generative AI startups lack a competitive advantage

Even the sustainability of ChatGPT's own business model remains uncertain, let alone other startups that rely solely on OpenAI's API. While companies that have achieved economies of scale in specific business domains may progress at a slower pace, once they clarify how to implement and apply AI, they can surpass these startups by a significant margin. Additionally, Tech Giants continuously reduce the cost of using generative AI through economies of scale, further amplifying the challenges faced by generative AI startups. Therefore, the primary focus of AI still lies in the "application domain." Launching a startup solely based on AI technology is a highly challenging endeavor that often requires immediate capital involvement.

10. Artificial General Intelligence (AGI)

The goal of achieving AGI is still distant, although the unexpected research results of GPT-4 and DeepMind's reinforcement learning work suggest the possibility of machines taking autonomous actions. However, most current efforts to pursue AGI combine various toolchains with existing LLMs at a high level, tackling tasks incrementally. The high costs involved have led to the abandonment of many open-source projects, and a fully generalizable solution has not yet emerged. The current progress is more reminiscent of Robotic Process Automation (RPA) than of AGI.

11. To unleash the value of AI, the most crucial factors are "trust," "user experience," and "business model"

This concludes the three essential elements for AI to expand its presence in every aspect of human society. While the field of Explainable AI (XAI) is experiencing rapid growth, it is still in its early development stages. Large-scale models like GPT-4 continue to be enigmatic to most, and there is much progress to be made in comprehending their decision-making and reasoning processes. User experience presents another significant issue as integrating AI into existing products and services brings forth exciting new opportunities while also posing a challenge to users' established habits. As for the business model, it is as mentioned above.

CEO Insights

Unleashing the Value of AI for Individuals and Enterprises (Part 2): Viewing AI from a Humanistic Perspective

To keep Taiwan's industry, government, academia, and research institutions ahead of the global technology and industry trends amid this wave of transformation, the Taiwan Science and Technology Hub (Taiwan S&T Hub) invited internationally renowned AI expert Dr. Fei-Fei Li, current director of Stanford's Human-Centered AI Institute, leader of the ImageNet project, and co-founder and chairperson of AI4ALL, to participate in the "What we see and what we value: AI with a human perspective" roundtable forum held on March 23. Together with local experts, they discussed how to make AI a key driving force for the betterment of humanity. The forum was moderated by Erica Lu, a social affairs consultant at Business Next, and featured guests including Pegatron Chairman Tung Tzu-Hsien, PChome CEO Alice Chang, and iKala co-founder and CEO Sega Cheng, among other industry experts.

The following article is based on the insights shared by iKala co-founder and CEO Sega Cheng during the roundtable discussion (Part 2).

Erica Lu:

In the audience, we have a high school student who would like to ask: Nowadays, we encourage everyone to learn about programming and software, and even to write code. In the age of AI, to what extent should we learn AI? Should we focus on learning AI tools, understanding the underlying principles, or actually developing a model?

Sega Cheng:

I have an eight-year-old daughter myself, so I have been considering whether children should learn programming languages at an early age. Let me first share my thoughts on programming languages. I believe that there is no need to invest too early in learning them, and we don't need to turn it into a nationwide movement.

We should look at this issue of children learning programming languages from two perspectives. The first is the perspective of AI. There is a research direction focused on redesigning computer systems. Computer systems have been built up in layers of abstraction, primarily for human convenience. As a result, many intermediate languages emerged, followed by programming languages and natural languages. However, AI does not require these intermediate layers. So my expectation is that future programming-language research will involve AI designing highly efficient programming languages, potentially entirely new ones. Programming languages will evolve faster than before, and the choices we make now will be completely different from those we make ten years from now. AI will likely accelerate the iteration of programming languages, resulting in the most efficient programming language in human history. It won't be too late to start learning it then.

The second perspective is that our interaction with AI has already reached the level of natural language. I often hear language educators say, "Oh no! Language institutions won't be needed anymore since AI can be a teacher," but I disagree. Natural language skills will only become more critical. Not only do you need to communicate with ChatGPT and design good prompts, but you also need to be able to guide the correct answers and ask the right questions. This skill is crucial for communication not only with humans but also with AI. AI contains vast knowledge waiting to be accessed, but you have to ask the right questions to obtain wisdom. So I think natural language is even more important. When it comes to education, I believe that natural language matters more than programming languages. For humans, language is not just a skill; learning an additional language means acquiring a new way of thinking, which is something AI cannot replace. As a result, a person who speaks multiple languages can have more critical thinking, diverse perspectives, and innovative combinations. Innovation essentially involves connecting distant ideas, and humans have this ability because of our language skills. The more languages we learn, the more diverse our thinking frameworks and abilities become.

Returning to the topic of education, should programming languages be included in the curriculum? I think they might eventually become a core subject like English, but one should never learn programming languages just for the sake of it. Moreover, AI might continue to polish the design of programming languages and computer systems. Whatever the case, we should embrace lifelong learning because everything evolves every day. If you are a middle school, high school, or college student, I believe the most important thing is to follow the pyramid structure: the base is "self-confidence," the middle is "self-management," and the top is "self-learning." These will remain unchanged regardless of the AI era.

Erica Lu:

How do you view the challenges of limited data faced by startups during their early stages, and what strategies do you suggest for obtaining more data?

Sega Cheng:

First and foremost, AI is already an open community, with numerous pre-trained base models and open datasets available. One strategy is to stand on the shoulders of giants, utilizing their well-trained base models, rather than attempting to train a new model from scratch, which could potentially cost millions of dollars. By fine-tuning the base model, you can add your own desired data and create a more specific model, much like teaching a child's brain a particular skill. For example, in our influencer search application, we input the influencer data into the AI "brain," which can then answer questions such as "Who is the best influencer in Japan for promoting ramen?" With GPT, it can provide a direct response. Nowadays, every enterprise can start training their own AI "brain" at reduced costs.
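As one illustration of "adding your own desired data," fine-tuning typically starts by converting structured records into prompt/completion pairs, one JSONL line per training example. The sketch below invents a toy influencer schema; the field names and records are hypothetical, not iKala's actual data model:

```python
import json

# Hypothetical influencer records; the fields are illustrative only.
records = [
    {"name": "Aiko", "country": "Japan", "topics": ["ramen", "street food"]},
    {"name": "Ben", "country": "Taiwan", "topics": ["travel", "coffee"]},
]

def to_finetune_examples(records: list[dict]) -> list[dict]:
    """Turn structured records into prompt/completion pairs for fine-tuning."""
    examples = []
    for r in records:
        for topic in r["topics"]:
            examples.append({
                "prompt": f"Who is a good influencer in {r['country']} for promoting {topic}?",
                "completion": r["name"],
            })
    return examples

lines = [json.dumps(e) for e in to_finetune_examples(records)]
print(len(lines))  # 4 JSONL lines, one training example each
```

A dataset in this shape can then be fed to whichever fine-tuning pipeline the chosen base model supports, which is the "teaching a child's brain a particular skill" step.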

As a result, we've transitioned from digital archives to intelligent archives. Previously, the data from vertical industries was the most valuable and was held by individuals. By using GPT models to store this data, it becomes an in-house expert within your enterprise or research project. We indeed see a scarcity of vertical data and recognize its value. While AI technology may become widely accessible, data is ultimately the most crucial asset. We also see research moving towards small data. In comparison, the human brain is incredibly efficient, consuming only 20-25 watts of power; the energy from eating a single hamburger is enough to power it for a whole day. Running a GPT for an entire day, by contrast, could cost tens of thousands or even millions of dollars, making the human brain far more efficient. The human brain can learn impressive things from very little data, an area in which AI still falls short. Thus, there is no need to worry.
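The brain-efficiency comparison can be sanity-checked with quick arithmetic, assuming a roughly 500 kcal hamburger (a figure not given in the text):

```python
# Back-of-the-envelope check of the brain-vs-hamburger claim.
# Assumptions: a ~500 kcal hamburger and a 20-25 W brain (both rough figures).
KCAL_TO_JOULES = 4184

hamburger_j = 500 * KCAL_TO_JOULES     # ~2.09 MJ of food energy

for watts in (20, 25):
    hours = hamburger_j / watts / 3600  # how long the burger powers the brain
    print(f"{watts} W brain runs ~{hours:.0f} h on one hamburger")
# A 20-25 W brain indeed runs for roughly a full day (23-29 h) on one burger.
```

So the claim holds to within rounding: one hamburger's worth of energy covers about a day of human-brain operation, while a day of large-model inference at scale costs orders of magnitude more.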

Erica Lu:

How should we contemplate the future AI ecosystem?

Sega Cheng:

The AI research community's openness has played a crucial role in the rapid development of AI research over the past few years. From Google's Transformer in 2017, to Google's BERT in 2018, and Meta's RoBERTa in 2019, followed by Stanford's Foundation Model work, the breakthroughs in AI are the result of continuous improvement by the community. However, with the rise of ChatGPT, there are concerns that this openness may be reversed. When the influence of AI has grown to the point of becoming a trade secret, leading AI companies will start protecting their intellectual property. From a pessimistic perspective, this may slow the pace of AI development, especially for research coming out of the big tech companies. The ecosystem may not continue to develop at the same rapid pace as the past decade, with everything being published. However, it will continue to progress because academic research is still pushing forward. Data will still be a big issue. From the industry perspective, AI will become like water and electricity, and the focus will shift to adding value to existing business models by finding the right "fields."

Taiwan's advantage lies in its hardware manufacturing, which is globally renowned, whether in chips or the entire hardware supply chain. These are irreplaceable assets, and the computing power required for AI is still in short supply, with only the largest companies able to access the latest hardware; these shortages will continue for the next few years. We can therefore expect Taiwan's semiconductor industry to continue to flourish. In the software industry, on the other hand, it is crucial to think globally from day one and treat Taiwan like Israel or Singapore, because the software industry seeks scale. Taiwan's population of 23 million is definitely not enough to focus on alone; other markets must be considered as well. Whether it is AI or other software, from day one we must look out at the world from Taiwan.

Erica Lu:

As a parent, what kind of AI world would Sega like his children to live in, and what can we do now?

Sega Cheng:

Expanding the timeline, maybe in 50, 100, or 200 years, humanity's development of digital technology may prove to be just a transitional phase. After 100 years, people may not even talk about digital technology or AI anymore, because those technologies will have matured. By then, AI will assist in crucial areas like gene editing, protein research, new drug development, longevity, and space exploration. Therefore, I think this period is very important: technology is always neutral, and human choices about how to use it will determine its impact. This is also a critical moment, as AI's influence has reached every corner of society. We must decide in which areas we really cannot use it, which areas should remain open, and which areas should be used with limitations. These decisions will pose significant challenges in the future.

Every time human society advances, polarization between people increases, and as progress continues to widen the gap, the danger grows. Why does technological progress shake the whole of society? The problem lies not in a lack of improvement in human living standards, but in the increase in inequality. Although people today are better off than those 100 years ago, they are not happier or more content, and that is because of inequality. If technology continues to widen the gap, society will collapse. Therefore, I believe this is a very significant problem that AI development must address.

Thus, I think we must look at AI from a societal perspective, not just from a technical standpoint. As we hurtle towards progress, it is crucial to ensure that everyone can board the train and obtain a ticket. This is also a lesson I learned from the industrial revolution because reskilling and upskilling are difficult but necessary to absorb the impact of the entire technological revolution. Therefore, we need to look at AI from a humanistic perspective, not just from an AI perspective.

CEO Insights

Unleashing the Value of AI for Individuals and Enterprises (Part 1): The Importance of Fields Where AI is Applied

To keep Taiwan's industry, government, academia, and research institutions ahead of the global technology and industry trends amid this wave of transformation, the Taiwan Science and Technology Hub (Taiwan S&T Hub) invited internationally renowned AI expert Dr. Fei-Fei Li, current director of Stanford's Human-Centered AI Institute, leader of the ImageNet project, and co-founder and chairperson of AI4ALL, to participate in the "What we see and what we value: AI with a human perspective" roundtable forum held on March 23. Together with local experts, they discussed how to make AI a key driving force for the betterment of humanity. The forum was moderated by Erica Lu, a social affairs consultant at Business Next, and featured guests including Pegatron Chairman Tung Tzu-Hsien, PChome CEO Alice Chang, and iKala co-founder and CEO Sega Cheng, among other industry experts.

The following article is based on the insights shared by iKala co-founder and CEO Sega Cheng during the roundtable discussion (Part 1).

Erica Lu:

As someone who comes from a software engineering background and has founded a startup that has been focusing on software development, you've even made AI empowerment an important goal for your company. Could you please share with us your thoughts on how AI is shaping the future of software, industry, and software services?

Sega Cheng:

In fact, when we were doing computer vision research at Stanford 18 years ago, it was really challenging, because there was no Cloud, no Big Data, no AI, and no iPhone. That was back in 2006. So now when we talk about ABC, which stands for AI, Big Data, and Cloud, it actually developed in reverse order: first there was Cloud, followed by Big Data, and then AI. Once AI had access to algorithms, computing power, and data, it quickly became effective. This has been the development pattern of AI. That's why iKala initially called itself a Human-Centered AI Company, as we believe AI is an augmentation, not a replacement. In fact, from an industry perspective, whether in healthcare, retail, or even military and defense, AI requires a specific field to be useful; without a field, AI is of no use. Regarding the turning point brought by GPT, we were actually quite surprised. We thought that GPT-like technology would only appear in three years, but OpenAI brought it forward by three years.

For the industry, we see this as a positive development. People have already started talking about AI's Moore's Law: AI now seems to double its capabilities or halve its costs approximately every three months, rather than every year and a half or two years. Since the release of ChatGPT, the software industry has been trying to shrink the AI "brain". Tesla's former AI director, Andrej Karpathy, initiated the nanoGPT project, aiming to continually reduce AI's training costs, deployment costs, application costs, and so on. Thus, AI has begun to exhibit Moore's Law, and given the openness of the AI research community, we can expect rapid progress. There are a variety of models and datasets, and as soon as a paper is published, its datasets and some source code are openly available, enabling AI advancements to happen very quickly. We expect that under this new Moore's Law, AI will eventually become accessible to everyone, with the cost of obtaining intelligence becoming lower and lower.
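Taken at face value, an "AI Moore's Law" that halves costs every three months compounds dramatically; a quick calculation shows how fast:

```python
# If unit cost halves every quarter, cost after n quarters is cost_0 * 0.5 ** n.
# This takes the quoted three-month halving at face value, as an illustration.
def cost_after(quarters: int, initial_cost: float = 1.0) -> float:
    return initial_cost * 0.5 ** quarters

# After two years (8 quarters), the same capability would cost 1/256 as much.
print(cost_after(8))  # 0.00390625
```

Compare that with the classic 18-to-24-month doubling cadence of hardware Moore's Law, under which two years buys roughly one halving rather than eight.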

That's why we think AI empowerment is so important. In the past, when talking about digital transformation, people were skeptical because they couldn't calculate the return on investment. They were uncertain about the long-term cost benefits. But if the purpose of digital transformation is to gain intelligence, then it will become a new wave for all industries. So, when the cost of gaining intelligence is almost zero, it brings up questions like "Will my existing workflow be disrupted? Will my personal productivity increase tenfold?" All of these are possible. So, when we look at AI empowerment, we treat AI as a utility, like water and electricity. In 10 or 20 years, we may not talk about AI anymore, because it will be as ubiquitous as water or electricity; just as we don't think about how the electrical grid works when we plug in our phones or use a bread machine, we won't think about AI.

In the future, AI will have reached a point where the cost is low enough and everyone, even non-AI experts, can use it. As Professor Fei-Fei Li mentioned, the world seems to turn over every time we wake up. Now, we must understand that we've always treated AI as a value-added service. When we talk about the impact on industries, I think AI will open up countless new possibilities, such as protein folding, better understanding of consumers, and a multitude of scientific experiments. It has already revolutionized scientific exploration, so AI can even have the potential to become a utility in research. We believe the opportunities brought by GPT and large-scale language models are boundless! In our long-term work with influencer searches, we've seen search evolve from keyword search to the next step of natural language search. What ChatGPT has shown us is that when humans can interact with computers in a comfortable and natural way, it represents a revolution in the software industry brought about by AI. Therefore, we can expect natural language search to become increasingly developed in the future.

As AI continues to advance and integrate into various industries, it will not only transform existing processes but also create new opportunities and potential applications. With the rapid development of AI technologies like GPT, the landscape of software and services is changing at an unprecedented pace. AI is expected to become an indispensable part of our daily lives, much like utilities such as water and electricity. In this future, everyone will be able to utilize AI to enhance their productivity, problem-solving abilities, and overall quality of life, making AI empowerment an integral aspect of our world.

Erica Lu:

When you first started your own business, it was inspired by the birth of the internet and the landscape was uncertain. However, there were many difficulties in embarking on a venture and delving into a technology, and you yourself have gone through several transformations. Now, as we enter a new era of technological advancements, what advice or thoughts do you have for those interested in investing in startups or exploring AI entrepreneurship in the future?

Sega Cheng:

When it comes to generative AI, my primary advice is to avoid starting a generative AI company. The main reason is that the success rate of startups is inherently low; even in Silicon Valley, 90% of companies fail within five years. As we discussed earlier, AI needs specific domains, so I believe even OpenAI will need broader and deeper moats to succeed. With language models readily available on GitHub, anyone can deploy their own text- or image-generation applications at low cost.

We realized this in 2018 when we introduced AI-driven MarTech to our clients. At that time, we developed a feature called Picaas, which used AI to remove background clutter in images and fill in the gaps, producing a clean, natural-looking photo. This functionality has since been adopted by Google Photos as Magic Eraser. When we launched the feature in 2018, we encountered two challenges. First, we found it hard to scale the business from a technical standpoint, which led us to realize that startups should prioritize customer needs rather than technology. AI can perform incredible tasks, but what do you want it to do for you? How can it be applied to your industry? McKinsey's research shows that 70% of AI's value comes from services that enhance existing business models, rather than from creating entirely new ones. This was a valuable lesson from our first experience with AI.
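Conceptually, background-clutter removal works by masking the unwanted region and filling it from the surrounding pixels. Picaas and Magic Eraser use learned models for this; the toy sketch below (the function name `inpaint_nearest` and all values are illustrative, not from Picaas) shows only the basic masking-and-filling idea with a nearest-neighbor fill:

```python
import numpy as np

def inpaint_nearest(image, mask):
    """Fill masked pixels with the value of the nearest unmasked pixel.

    image: 2-D float array (grayscale); mask: boolean array, True = remove.
    A toy stand-in for the learned inpainting a tool like Picaas performs.
    """
    filled = image.copy()
    known = np.argwhere(~mask)               # coordinates of pixels we keep
    for y, x in np.argwhere(mask):           # for each pixel to fill...
        d = np.abs(known - [y, x]).sum(axis=1)  # Manhattan distance to known pixels
        ny, nx = known[d.argmin()]              # nearest known pixel
        filled[y, x] = image[ny, nx]
    return filled

# A 4x4 background with a bright "clutter" patch marked for removal.
img = np.full((4, 4), 10.0)
img[1:3, 1:3] = 99.0
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

clean = inpaint_nearest(img, mask)
print(clean.max())  # the clutter values are replaced by background values
```

Real products replace the nearest-neighbor fill with a generative model that synthesizes plausible texture, but the workflow (segment, mask, fill) is the same.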

The second lesson centered on an ethical issue that arose after we launched the background removal and replacement feature. The design community expressed concerns about the potential for image theft and alteration. In response, we retrained our model to identify images from paid libraries and avoid modifying them, preventing copyright issues. Addressing these concerns gave us valuable insight into the feasibility of generative AI's business models.

On the other hand, while people worry that generative AI may lead to job loss, AI inherently replaces tasks rather than entire occupations. It leads to a "deconstruction" of job roles rather than the sudden disappearance of jobs. Throughout human history, it is rare for a whole class of jobs to vanish suddenly; instead, as technology advances, it replaces certain tasks. For example, ChatGPT is great for summarizing information, but you'll find that editing its output takes considerable time. The time saved on ideation, summarization, and drafting still increases overall productivity, even if the time spent on particular tasks rises or falls. When considering how AI technology can be popularized and create consumer value, it's essential to deconstruct the entire workflow and determine which tasks AI can and cannot solve.

We've been discussing generative AI and software, but robotics will also see significant advances this year thanks to GPT and multimodal technologies. In Professor Fei-Fei Li's lab, robots can now recognize and execute over a thousand actions. From online to physical applications, we can expect substantial progress in the coming quarters.

Concerning the risk of AI technology running amok, we don't need to worry too much about narrow or weak AI. However, when AI becomes a utility, we can expect governments to regulate it similarly to how they regulate water, electricity, and national defense. As AI transforms into a utility, governments worldwide will intensify their supervision and regulation of this technology.

CEO Insights

An Overview of Business Model Evolution in the Technology Industry Through the Emergence of ChatGPT

Many believe that multinational tech giants continuously create new business value through sheer technological breakthroughs and by open-sourcing their results; however, this perspective is only partially correct. They are able to continuously create significant value not just because of their technology, but also because of "network externality", also known as the network effect, and the economic principle of "increasing marginal revenue".

"Network externality" has been fully understood by the industry since the rise of the digital economy. Simply put, a few technology companies have control over user scale and service entry points. In this case, not only does increased usage of a service lead to its increased value, creating a positive cycle, but even small improvements to the service can generate substantial new business value. In the classic book "Zero to One," Peter Thiel also emphasizes the importance of network externality, which is inherent to digital products.

Imagine: if Facebook or Google improves its computer vision recognition rate by 1%, that may represent over US$1 billion, or even up to US$10 billion, in new advertising value. For a startup company, however, the added value would be relatively limited, because the foundation and economies of scale for advertising are dominated by the technology giants rather than by general startups.

"Increasing marginal revenue" is an inherent phenomenon in the knowledge economy. It refers to the fact that as more knowledge and technology are invested in a knowledge-dependent economy, output will increase and producer revenue will also trend upward. In contrast, traditional agricultural and industrial economies rely on material resources, which have a distinct exclusionary characteristic: their value can only be utilized by one user at a time. Furthermore, these resources are scarce, and as their usage increases, costs become higher, ultimately leading to a decrease in producer revenue. Knowledge-based resources are shareable, and the same knowledge can be simultaneously occupied and used by multiple people. Also, they are not consumed, but rather utilized, and generate new knowledge during use. Information and knowledge resources accumulate and develop with use, and their cost decreases with repetitive use, leading to increasing revenue.

Why do technology giants continuously open source their cutting-edge technology without reservation? Because open-source code is not their core competitive advantage; the real competitive advantages lie in "network effects" and "increasing marginal revenue". Open-sourcing this code further exploits existing network externalities to create a larger user base. A startup that developed a framework like TensorFlow or PyTorch might carefully protect it as intellectual property, treat it as a core asset, and even try to profit from it directly; but for big tech, TensorFlow or PyTorch is just another product: an amazing technological innovation whose core purpose is to expand the network effects of its other products. These two business logics are fundamentally different.

With these two effects combined, the "big gets bigger" phenomenon of the digital economy emerges. That's why the global technology giants can continuously attract top talent, keep investing in new technology and knowledge-based products, and grow year after year. Scholars at Harvard Business School have summarized this economic phenomenon with a simpler term: the "hub economy", meaning that companies that control the entry points for users and services hold a significant competitive advantage.

However, the emergence of ChatGPT has shaken this competitive advantage. Before November 2022, few expected a non-profit organization like OpenAI to achieve such success with deep learning. With funding from Microsoft, OpenAI was able to train and launch ChatGPT, which became an overnight sensation. Its applications may quickly expand into various fields and could significantly affect the core business models of the technology giants.

Certainly, technology giants still have powerful product channels and network effects, and they are actively deploying technology equivalent to ChatGPT across their products to strengthen their own network effects and defend against the threat ChatGPT poses. But should they continue to share their advanced AI technology with nimble competitors, who may then use it against them? The industry has likely begun to notice this issue, which is not a technical challenge or a question of lofty universal values, but a fundamental business problem.

On the other hand, some startups built their business models directly on GPT-3, and their decisions and actions were already considered fast. Yet OpenAI recently launched ChatGPT, based on GPT-3.5, and GPT-4 is rumored to be on the horizon. For companies that built their entire business on GPT-3, this is significant and unpredictable, akin to a black swan event. Nevertheless, this is the pace of technological evolution in the industry today.

We are now witnessing a historic transformation in the technology industry, both in terms of technology and business.