シオン
@shion
OpenAI Rules the Changes But Meta Changes the Rules
https://www.thealgorithmicbridge.com/p/openai-rules-the-changes-but-meta

Meta's open-access Llama 3 model is changing the AI game, threatening OpenAI's dominance. But can OpenAI's rumored GPT-5 masterpiece save the day?

シオン
@shion 1 month ago
126


CorbDium

1 month ago

Meta's open-source contributions, such as React and PyTorch, don't compromise their business model, which is primarily driven by advertising. In contrast, OpenAI's business model is still uncertain, so they have to weigh the risks of open-sourcing their work much more carefully. It's also worth asking why Google hasn't followed Meta's lead in open-source development.

Reply

34

musslermag1c

1 month ago

Google's inconsistency in seeing projects through to completion makes it difficult to rely on them. Meta, by contrast, has a track record of impactful and lasting contributions such as React and PyTorch, which have shaped the frontend and deep learning communities respectively.

Reply

20

KaylaWrites

1 month ago

We're on a mission to create a utopia where everyone conforms to our ideals and uses our solutions, and we won't let anyone get in the way.

Reply

19

塔尼娅·艾琳·里斯

1 month ago

Since all models are trained on largely the same data, they'll ultimately converge to similar capabilities. That's why Meta can afford to give Llama away: it's likely to reach OpenAI's level of proficiency within 1-2 years, or even less.

Reply

18

zonze_zone

1 month ago

This sounds like a classic example of the "commoditize your complement" strategy, where one company makes a product or service free or cheap to increase the value of something else they offer.

Reply

17

LaborNation

1 month ago

It seems like Meta is adopting a retaliatory approach, aiming to dismantle OpenAI's sustainable business model in the long run.

Reply

8

Related

Laurence Tratt: What Factors Explain the Nature of Software?

A triad of interacting factors that define the nature of software:

1. Liminal state: Software occupies a state between the constraints of the physical world and the fantasy world of unlimited possibilities. This leads to a mix of hard and soft constraints, making it difficult to determine what is possible and what is not.

2. Circular specification problem: It is impossible to fully specify software before building it, as the act of creating software is also an act of specification. This leads to gaps between our ideas and the reality of the software.

3. Observer effect: The act of observing software in action changes what we think the software should be. This leads to changes in requirements and specifications, and can result in extra work and friction.

tratt.net

Elon Musk's xAI nears $10 bln deal to rent Oracle's AI servers, The Information reports

Elon Musk's artificial intelligence startup xAI is in talks with Oracle to rent cloud servers for $10 billion over several years, making xAI one of Oracle's largest customers. The deal would help Musk rival AI offerings from OpenAI and Google, and comes as xAI prepares to launch an enhanced version of its chatbot Grok. Oracle's co-founder Larry Ellison is a close friend of Musk, and xAI is already a major user of Oracle's cloud technology, using over 15,000 of its AI chips.

www.reuters.com

GPT-4o’s Memory Breakthrough! (NIAN code)

The article discusses a new benchmark called "Needle in a Needlestack" (NIAN) that measures how well large language models (LLMs) pay attention to information in their context window. The benchmark involves a prompt with thousands of limericks and asks a question about a specific limerick at a certain location. The author tests various LLMs, including GPT-4 Turbo, Claude-3 Sonnet, GPT-4o, Mistral's models, and GPT-3.5-turbo, on this benchmark.

The results show that:

* GPT-4o performs almost perfectly on the benchmark, a significant breakthrough.
* GPT-4 Turbo and Claude-3 Sonnet struggle with the benchmark.
* Mistral's models, including the 8x22 model and Mistral large, perform poorly, with accuracy ranging from 50% to 70%.
* Shorter prompts improve the performance of models, as seen with Mistral 7b.
* Repeating information, such as repeating the limerick 10 times, can significantly improve performance, as seen with GPT-3.5-turbo.
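The setup described above can be sketched in a few lines. This is a minimal illustration only; the function names and prompt layout are assumptions, not the benchmark's actual code (which lives at nian.llmonpy.ai):

```python
# Hypothetical sketch of a "Needle in a Needlestack" (NIAN) style prompt:
# bury one target limerick among many fillers, then ask a question that
# only the target limerick can answer.

def build_nian_prompt(fillers, target, position, question):
    """Insert the target limerick at `position` among filler limericks,
    then append the question at the end of the prompt."""
    limericks = fillers[:position] + [target] + fillers[position:]
    haystack = "\n\n".join(limericks)
    return f"{haystack}\n\nQuestion: {question}"

def repeat_needle(fillers, target, times=10):
    """The repetition trick from the results: duplicating the target
    limerick several times markedly helped GPT-3.5-turbo."""
    return fillers + [target] * times

fillers = [f"There once was a filler number {i}..." for i in range(1000)]
target = "There once was a coder named Lou..."
prompt = build_nian_prompt(
    fillers, target, position=500,
    question="What was the name of the coder in the limerick?")
```

Varying `position` across the context window is what exposes the attention gaps the benchmark measures: a model that only attends to the start and end of long prompts will fail when the target sits in the middle.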

nian.llmonpy.ai