{"id":13720,"date":"2026-01-30T02:05:51","date_gmt":"2026-01-30T02:05:51","guid":{"rendered":"https:\/\/diyhaven858.wasmer.app\/index.php\/a-yann-lecun-linked-startup-charts-a-new-path-to-agi\/"},"modified":"2026-01-30T02:05:51","modified_gmt":"2026-01-30T02:05:51","slug":"a-yann-lecun-linked-startup-charts-a-new-path-to-agi","status":"publish","type":"post","link":"https:\/\/diyhaven858.wasmer.app\/index.php\/a-yann-lecun-linked-startup-charts-a-new-path-to-agi\/","title":{"rendered":"A Yann LeCun\u2013Linked Startup Charts a New Path to AGI"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p><span class=\"lead-in-text-callout\">If you ask<\/span> Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will get us to artificial general intelligence (AGI), the threshold where computers match or exceed human smarts. Everyone, he declared in a recent interview, has been \u201cLLM-pilled.\u201d<\/p>\n<p class=\"paywall\">On January 21, San Francisco\u2013based startup Logical Intelligence appointed LeCun to its board. Building on a theory conceived by LeCun two decades prior, the startup claims to have developed a different form of AI, better equipped to learn, reason, and self-correct.<\/p>\n<p class=\"paywall\">Logical Intelligence has developed what\u2019s known as an energy-based reasoning model (EBM). Whereas LLMs effectively predict the most likely next word in a sequence, EBMs absorb a set of parameters\u2014say, the rules to sudoku\u2014and complete a task within those confines. 
This method is supposed to eliminate mistakes and require far less compute, because there\u2019s less trial and error.<\/p>\n<p class=\"paywall\">The startup\u2019s debut model, Kona 1.0, can solve sudoku puzzles many times faster than the world\u2019s leading LLMs, even though it runs on just a single Nvidia H100 GPU, founder and CEO Eve Bodnia told WIRED in an interview. (In this test, the LLMs are blocked from using coding capabilities that would allow them to \u201cbrute force\u201d the puzzle.)<\/p>\n<p class=\"paywall\">Logical Intelligence claims to be the first company to have built a working EBM, until now just a flight of academic fancy. The idea is for Kona to address thorny problems like optimizing energy grids or automating sophisticated manufacturing processes, in settings with no tolerance for error. \u201cNone of these tasks is associated with language. It\u2019s anything but language,\u201d says Bodnia.<\/p>\n<p class=\"paywall\">Bodnia expects Logical Intelligence to work closely with AMI Labs, a Paris-based startup recently launched by LeCun, which is developing yet another form of AI\u2014a so-called world model, meant to recognize physical dimensions, demonstrate persistent memory, and anticipate the outcomes of its actions. The road to AGI, Bodnia contends, begins with layering these different types of AI: LLMs will interface with humans in natural language, EBMs will take up reasoning tasks, and world models will help robots take action in 3D space.<\/p>\n<p class=\"paywall\">Bodnia spoke to WIRED over videoconference from her office in San Francisco this week. The following interview has been edited for clarity and length.<\/p>\n<p class=\"paywall\"><strong>WIRED: I should ask about Yann. 
Tell me about how you met, his part in steering research at Logical Intelligence, and what his role on the board will entail.<\/strong><\/p>\n<p class=\"paywall\"><strong>Bodnia:<\/strong> Yann has a lot of experience on the academic side as a professor at New York University, but he\u2019s been exposed to real industry through Meta and other collaborators for many, many years. He has seen both worlds.<\/p>\n<p class=\"paywall\">To us, he\u2019s the only expert in energy-based models and different kinds of associated architectures. When we started working on this EBM, he was the only person I could speak to. He helps our technical team navigate certain directions. He\u2019s been very, very hands-on. Without Yann, I cannot imagine us scaling this fast.<\/p>\n<p class=\"paywall\"><strong>Yann is outspoken about the potential limitations of LLMs and about which model architectures are most likely to push AI research forward. Where do you stand?<\/strong><\/p>\n<p class=\"paywall\">LLMs are a big guessing game. That\u2019s why you need a lot of compute. You take a neural network, feed it pretty much all the garbage from the internet, and try to teach it how people communicate with each other.<\/p>\n<p class=\"paywall\">When you speak, your language is intelligent to me, but not because of the language. Language is a manifestation of whatever is in your brain. My reasoning happens in some sort of abstract space that I decode into language. I feel like people are trying to reverse engineer intelligence by mimicking intelligence.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>If you ask Yann LeCun, Silicon Valley has a groupthink problem. Since leaving Meta in November, the researcher and AI luminary has taken aim at the orthodox view that large language models (LLMs) will get us to artificial general intelligence (AGI), the threshold where computers match or exceed human smarts. 
Everyone, he declared in a [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":13721,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_daextam_enable_autolinks":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11],"tags":[],"class_list":["post-13720","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/diyhaven858.wasmer.app\/wp-content\/uploads\/2026\/01\/Model-Behavior-Inside-Logical-Intelligence-Business.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/13720","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/comments?post=13720"}],"version-history":[{"count":0,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/13720\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media\/13721"}],"wp:attachment":[{"href":"https:
\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media?parent=13720"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/categories?post=13720"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/tags?post=13720"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}