
Exploring AI Trends with Meta's Groundbreaking Announcement of LLaMA 2

The First Major News in the AI Community for the Second Half of the Year: Meta Releases LLaMA 2, Fully Open Source and Free for Commercial Use.

In the AI Mid-Year Summary article I shared last month, I noted that Meta had already secured a significant leadership position in the AI open-source community with LLaMA. Now Meta is building on that success with a major move: launching LLaMA 2 today (July 19th) to further solidify its position.

Compared to the first generation of LLaMA, LLaMA 2 is trained on 2 trillion tokens (a measure of the amount of text) and offers a context length of 4,096 tokens, twice that of the previous generation.

Recently, the AI research community has been racing to extend context length, that is, the number of tokens a language model can process at once. A longer context lets the model reason over more material in a single pass, which translates into better AI performance.

OpenAI's ChatGPT, Anthropic's Claude, and some recently published research claim context lengths of 32,768 tokens, 100,000 tokens, and 1 million tokens, respectively. While LLaMA 2's context window may look short in comparison, it is crucial to note that LLaMA 2 is the only one among them that is open source.
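Because the weights are downloadable, anyone can check that context window for themselves. Below is a minimal sketch, assuming you have been granted access to the gated meta-llama checkpoints on Hugging Face and have the transformers library installed; the model ID and prompt are purely illustrative.

```python
# Minimal sketch: inspect LLaMA 2's context window from its published config.
# Assumes access to the gated meta-llama repo on Hugging Face.
from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # smallest LLaMA 2 variant

config = AutoConfig.from_pretrained(model_id)
print(config.max_position_embeddings)  # 4096 for LLaMA 2 (LLaMA 1 was 2048)

# Tokenizing a prompt shows how quickly real text consumes that budget.
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = "Summarize the key differences between LLaMA and LLaMA 2."
print(len(tokenizer(prompt)["input_ids"]))
```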

In the announcement of LLaMA 2, Meta mentioned that its capabilities are still not as advanced as GPT-4. However, this precisely highlights a major challenge for OpenAI: In the future, every enterprise will need its own AI brain as an operational center, but this AI brain doesn't necessarily have to be as powerful or expensive as GPT-4.

What enterprises need is a customized AI model that can address their specific business challenges and "possess domain expertise". They don't require an all-knowing AI.

Since the first half of the year, the AI arms race centered on "downsizing the AI brain" (as mentioned in my previous article) has been accelerating. LLaMA 2 marks a new milestone: it is not only fully open source, it also arrives alongside a significant partnership announced by Meta.

Qualcomm and Meta have announced a collaboration to bring LLaMA 2 to smartphone chips, with devices expected in 2024. This gives Meta a first-mover advantage in the AI edge-computing market; for now, no other big tech company has a comparable open-source model to compete with LLaMA 2.

Let's not forget that Google once achieved dominance in the mobile operating system (OS) market by leveraging open-source Android. Meta, having missed the mobile platform opportunity, has had to rely on the ecosystems of Apple and Google, constantly adjusting to privacy changes that affect its advertising business; the competition among these three companies has never subsided.

This year, Zuckerberg decisively shifted Meta's focus from the metaverse to fully embrace AI. With (the accidental leak of) the first-generation LLaMA, Meta has opened up a new competitive landscape and gained the opportunity to reach deeper into everyone's smartphone.

Let's not forget what we have always emphasized: the digital business domain is ultimately an ecosystem battle. Meta combines three crucial weapons: AI chips, open-source AI models, and its existing, powerful network effects. Amid the clash between the two giants, Google and OpenAI/Microsoft, Meta has suddenly entered a new AI battlefield, aiming to start from social networking applications and push vertically down into the computing chips in everyone's smartphones.

At this point, what I said before has been borne out: anyone claiming that Meta is absent from, or falling behind in, the AI war is completely misjudging the situation.

Meta is not playing catch-up; it has entered the AI battle from a completely different competitive angle. Many still doubt Zuckerberg's metaverse, but he is genuinely impressive. I have always argued that AI's development will accelerate the growth of the metaverse. Looking back in a few years, we will realize that Zuckerberg was only taking a temporary detour.

Rumor has it that Meta is running internal tests to deploy LLMs (large language models) on Messenger at scale. As the operator of the world's largest messaging platforms, Meta's portfolio is the natural home for creating large numbers of popular digital humans. I'm quite certain Meta will move into this market swiftly.

Therefore, emerging generative AI companies, such as digital human startups, have been under pressure lately. After all, once the tech giants catch up and assert their dominance, these companies could be heavily impacted.

The network effect remains the strongest moat controlled by big tech, and the second half of the year will be their home field. Startups built solely on generative AI technology, without a clear moat, will face immense competitive pressure.

With the groundbreaking release of LLaMA 2, the industry is witnessing a bloom of open-source models being used to build all kinds of AI applications. Companies that have cautiously guarded their "exclusive large language models" as trade secrets and primary competitive advantages must now swiftly construct an entire AI ecosystem and become new gateways to the internet. Otherwise, they risk being overshadowed and eventually consumed by the existing ecosystems of the big tech companies.

Building a new ecosystem is not simple, and these enterprises will face a critical moment of survival in the second half of the year. The rapid pace of market change is truly remarkable.


11 Key Highlights of Global AI Technology Developments in the First Half of 2023

In the first half of 2023, AI technology development took a significant leap forward. This article shares 11 key insights drawn from our practical experience with AI, primarily based on iKala's AI-powered products, including KOL Radar, our influencer marketing platform with data on more than 1 million influencers across countries worldwide, and iKala CDP, our customer data platform. The insights also draw on our R&D efforts in helping clients implement AI solutions. iKala remains committed to sharing our results through platforms such as Hugging Face and public research papers.

1. Does a downsized AI model (AI brain) retain the same level of intelligence?

Achieving this is challenging, as the number of parameters in large language models (LLMs) remains a decisive factor in their capabilities. We tested over 40 models, most of them below 30B parameters, and found that shrinking the AI brain hinders its ability to maintain the same level of comprehension; the paper "The False Promise of Imitating Proprietary LLMs" summarizes this finding. Still, who wouldn't want a model that is small, performs well, and runs fast? The research community therefore continues to invest significant effort in downsizing models. It is worth noting that a model's ability to write programs seems to have no inherent correlation with model size, which diverges from the trends observed in other tasks requiring human-like cognition. We currently cannot explain this phenomenon.

2. The competitive race of exaggerated claims among open-source language models across various sectors

Continuing from the previous point: over the past six months, numerous public models have been released. Many companies and developers claim that with just a few hundred (or thousand) dollars, a small amount of training data, and a short training run, they can reach, for example, 87% of GPT-4's performance. These seemingly remarkable results should be treated as reference points only; it is crucial to deploy the models and reproduce the experiments to validate the claims.
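For context, such headline percentages typically come from an automated judging pipeline roughly like the sketch below (a rough illustration only; judge_score is a hypothetical placeholder, in practice often another LLM acting as judge). The choice of prompts and judge heavily shapes the final number, which is exactly why reproduction matters.

```python
# Rough illustration of how "X% of GPT-4" figures are typically produced:
# score two models' answers over a shared prompt set and report the ratio.
# judge_score is a hypothetical placeholder, not a real library call.
from typing import List

def judge_score(prompt: str, answer: str) -> float:
    """Placeholder: returns a quality score in [0, 10], often from an LLM judge."""
    raise NotImplementedError

def relative_performance(prompts: List[str],
                         answers_small: List[str],
                         answers_large: List[str]) -> float:
    score_small = sum(judge_score(p, a) for p, a in zip(prompts, answers_small))
    score_large = sum(judge_score(p, a) for p, a in zip(prompts, answers_large))
    return score_small / score_large  # e.g. 0.87 is reported as "87% of GPT-4"
```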

3. The trend of AI privatization and customization has already emerged

Customers who have adopted AI at an accelerated pace widely acknowledge how difficult it is to replicate large-scale models such as GPT-4, Claude, Midjourney, and PaLM 2. Nor is it necessary to allocate significant resources solely to integrate such large models into their own businesses. For the majority of enterprises, a "general-purpose LLM" is not necessary; what they need are language models with capabilities tailored to their specific business models.

4. Weaken the overall model or remove specific capabilities

Because replicating large models is largely infeasible and enterprises urgently need to adopt AI, the current direction is to train "industry-specific models." The approach is either to directly remove specific capabilities of the language model (e.g., keeping only listening, speaking, and reading, but not writing) or to weaken the entire model (e.g., reducing proficiency in listening, speaking, reading, and writing simultaneously). See the paper "Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes" for one such approach.
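As a rough illustration of the "weakening" route (this is a generic knowledge-distillation sketch, not the specific method of the cited paper), a large teacher model's output distribution can supervise a much smaller student, trading overall capability for size and speed:

```python
# Generic knowledge-distillation sketch in PyTorch: the teacher's softened
# output distribution supervises a smaller student model.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Illustrative shapes only: a batch of 4 positions over a 32,000-token vocabulary.
teacher_logits = torch.randn(4, 32000)
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```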

5. Dual-track thinking in enterprise AI implementation

Business owners are currently working out how to apply AI to their existing business models and to raise the productivity of their workforce. Organizations are actively breaking down internal workflows, streamlining processes, and automating tasks to improve individual productivity. Simultaneously, they are exploring the potential added value of AI in their business models. Many of our customer companies start by establishing a "data hub" with iKala CDP, accelerating data organization while exploring AI opportunities. Data is essential for AI, and since AI will undoubtedly play a significant role in the future, it is crucial to address these foundational tasks while navigating the exploration phase. As a result, we are seeing rapid market development in big data and cloud, driven by AI. Unlike past digital transformations, which often lacked clear objectives, the current push for "intelligence" comes with well-defined goals that demonstrate the effectiveness of AI, so business owners are increasingly willing to invest in it.

6. The trend of platformization for large-scale models has emerged

Continuing from the discussion of large-scale models and the concept of economies of scale: only the Tech Giants can afford the training and operational costs of these models. By driving down unit service costs, they create barriers for smaller competitors, and the models are consequently moving toward platformization. They serve as the foundation for private models, allowing external companies and developers to access the outputs of these large models at low cost, while access to the details remains limited. Currently, Tech Giants have little incentive to disclose the specifics of these large models, with Meta being an exception, as mentioned in the next point. Unless governments step in with regulation, this path remains long and difficult; ultimately, policymakers are likely to seek a balance between commercial interests and national (or regional) governance in their relationship with the Tech Giants. Overall, the Tech Giants are unlikely to suffer significant harm.

7. Meta has already taken the lead in the open-source AI community

With the widespread popularity of LLaMA and SAM (Segment Anything Model), Meta has regained a significant level of influence in the AI landscape. However, what sets Meta apart is that the company does not rely directly on AI for revenue generation. While Google, Amazon, and Microsoft offer AI services on their cloud platforms for enterprise leasing, and OpenAI sells subscriptions for ChatGPT, Meta continues to generate substantial advertising revenue through its immensely powerful community platform and network effects. Therefore, Meta's commitment to openness in AI undoubtedly surpasses that of other Tech Giants.

8. The development of AI models is predominantly led by the English language system

The majority of (open-source) models clearly perform best in English, which widens the technological development gap between Western countries and the rest of the world. It also influences the future usage habits of global users and may even jeopardize the prevalence of certain languages. As a result, governments and large private enterprises in various countries have started developing their own large-scale models to counter this "linguistic hegemony." However, even with significant resources invested in training country-specific models, the key factor remains the demand for their usage. When training AI models reaches a national level, it becomes a matter of marketing and service rather than technical challenges. Therefore, relevant entities in each country should focus their efforts accordingly.

9. The majority of generative AI startups lack a competitive advantage

Even the sustainability of ChatGPT's own business model remains uncertain, let alone that of startups that rely solely on OpenAI's API. Companies that have already achieved economies of scale in specific business domains may move more slowly, but once they work out how to implement and apply AI, they can surpass these startups by a wide margin. In addition, the Tech Giants keep driving down the cost of using generative AI through economies of scale, further amplifying the challenges faced by generative AI startups. The primary focus of AI therefore still lies in the "application domain"; launching a startup on AI technology alone is a highly challenging endeavor that often requires capital involvement from the very start.

10. Artificial General Intelligence (AGI)

The goal of AGI is still distant, although unexpected research results from GPT-4 and from DeepMind's work in reinforcement learning (RL) suggest that machines taking autonomous actions is possible. Most current attempts at AGI, however, are high-level combinations of existing LLMs with various toolchains, tackling tasks incrementally. The high costs involved have led many open-source projects to be abandoned, and no fully generalizable solution has emerged. The current progress is more reminiscent of Robotic Process Automation (RPA) than of AGI.
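To make that concrete, most of these projects boil down to a fixed plan-act-observe loop hard-coded around an LLM and a handful of hand-wired tools, roughly like the sketch below (all function names are hypothetical placeholders, not any particular framework's API). The scripted loop, not the model, supplies the structure, which is why the result feels closer to RPA than to AGI.

```python
# Schematic sketch of a typical "agent": an LLM wrapped in a fixed loop over
# hand-wired tools. All names are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    """Placeholder for a hosted LLM call (e.g., an API request)."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"(search results for: {query})",
    "calculator": lambda expr: str(eval(expr)),  # toy example; never eval untrusted input
}

def run_agent(task: str, max_steps: int = 5) -> str:
    history = f"Task: {task}"
    for _ in range(max_steps):
        # Ask the LLM to pick the next tool and its input, one step at a time.
        decision = call_llm(f"{history}\nNext action as 'tool: input', or 'FINAL: answer'")
        if decision.startswith("FINAL:"):
            return decision[len("FINAL:"):].strip()
        tool_name, _, tool_input = decision.partition(":")
        tool = TOOLS.get(tool_name.strip(), lambda _: "unknown tool")
        history += f"\n{decision}\nObservation: {tool(tool_input.strip())}"
    return history  # out of steps; the fixed script, not the model, decides when to stop
```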

11. To unleash the value of AI, the most crucial factors are "trust," "user experience," and "business model"

These are the three essential elements for AI to expand its presence into every aspect of human society. Explainable AI (XAI) is growing rapidly but is still in its early stages; large models like GPT-4 remain opaque to most people, and there is much progress to be made in understanding their decision-making and reasoning processes. User experience is another significant issue: integrating AI into existing products and services opens exciting new opportunities while also challenging users' established habits. As for the business model, it is as discussed above.