{"id":86810,"date":"2026-04-29T18:11:19","date_gmt":"2026-04-29T18:11:19","guid":{"rendered":"https:\/\/diyhaven858.wasmer.app\/index.php\/sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed\/"},"modified":"2026-04-29T18:11:19","modified_gmt":"2026-04-29T18:11:19","slug":"sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed","status":"publish","type":"post","link":"https:\/\/diyhaven858.wasmer.app\/index.php\/sanctioned-chinese-ai-firm-sensetime-releases-image-model-built-for-speed\/","title":{"rendered":"Sanctioned Chinese AI Firm SenseTime Releases Image Model Built for Speed"},"content":{"rendered":"<div>\n<p><span class=\"lead-in-text-callout\">SenseTime, a Chinese<\/span> AI company best known for its facial recognition technology, released a new open source model on Tuesday that it claims can both generate and interpret images far faster than top models developed by US competitors. SenseNova U1 could help the company reclaim lost ground after it slipped from its place among the leading players in China\u2019s AI development race.<\/p>\n<p class=\"paywall\">The model\u2019s secret sauce is its ability to \u201cread\u201d images without translating them to text first, speeding up the process and reducing the amount of computing power required. \u201cThe model\u2019s entire reasoning process is no longer limited to text. It can reason with images as well,\u201d Dahua Lin, cofounder and chief scientist at SenseTime, said in an interview with WIRED.<\/p>\n<p class=\"paywall\">Lin, who is also a professor of information engineering at the Chinese University of Hong Kong, says that models capable of processing images directly will enable robots to better understand the physical world in the future.<\/p>\n<p class=\"paywall\">SenseTime says U1, like DeepSeek&#8217;s latest flagship model, can be powered by Chinese-made chips. 
\u201cSeveral Chinese domestic chipmakers have finished optimizing compatibility with our new model,\u201d Lin says. On release day, 10 Chinese chip designers, including Cambricon and Biren Technology, announced their hardware supports U1.<\/p>\n<p class=\"paywall\">That flexibility matters because US export controls restrict Chinese firms from accessing the world&#8217;s most advanced AI chips, particularly those used for training, which at this point are primarily developed by Western companies like Nvidia. \u201cWe will continue to push for training on more different chips,\u201d Lin says. But he also acknowledges that SenseTime \u201cmay still need to use the best chips to ensure the speed of our iteration.\u201d<\/p>\n<p class=\"paywall\">SenseTime released U1 for free on Hugging Face and GitHub, another sign of how Chinese companies are becoming some of the most active contributors to open source AI.<\/p>\n<p class=\"paywall\">SenseTime was founded in 2014 and became a world leader in computer vision, which is used in applications like facial recognition and autonomous driving. But when ChatGPT and other AI systems powered by natural language processing became the hottest thing in the tech industry, SenseTime began struggling to turn a profit and fell behind newer Chinese startups like DeepSeek and MiniMax.<\/p>\n<p class=\"paywall\">SenseTime says it hopes that releasing SenseNova-U1 publicly for anyone to use will help it catch up with both domestic and Western AI players. Lin says the company finally made the decision last year to focus on open source because of the helpful feedback it gets from researchers, which enables the company to iterate faster. \u201cIn this day and age, being open source or closed source is not the winning factor; the speed of iteration is,\u201d Lin explains.<\/p>\n<p class=\"paywall\">Going open source also helps SenseTime continue collaborating with international researchers without the interference of geopolitics. 
The company has been sanctioned repeatedly by the US government in recent years over allegations that its facial recognition technology helped power surveillance systems used to monitor and detain Uyghurs and other minority groups in China\u2019s Xinjiang region. As a result, US firms are restricted from investing in SenseTime and selling certain technologies to it without a license. (SenseTime has denied the allegations.)<\/p>\n<div class=\"GenericCalloutWrapper-loJzHJ fCTEYJ callout--has-top-border\" data-testid=\"GenericCallout\">\n<figure class=\"AssetEmbedWrapper-iJvQnD cOWUYC asset-embed\">\n<div class=\"AssetEmbedAssetContainer-fnduJP iaVSwI asset-embed__asset-container\"><span class=\"SpanWrapper-kFnjvc eKnjjD responsive-asset AssetEmbedResponsiveAsset-gaAbQ hXaxHA asset-embed__responsive-asset\"><picture class=\"ResponsiveImagePicture-jKunQM gjCCFj AssetEmbedResponsiveAsset-gaAbQ hXaxHA asset-embed__responsive-asset responsive-image\"><img decoding=\"async\" alt=\"Image may contain Mike He Yan Kuan Text Scoreboard Adult Person and Head\" loading=\"lazy\" class=\"ResponsiveImageContainer-dkeESL cQPiWi responsive-image__image\" srcset=\"https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_120,c_limit\/Zhang%20Mingyuan.jpeg 120w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_240,c_limit\/Zhang%20Mingyuan.jpeg 240w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_320,c_limit\/Zhang%20Mingyuan.jpeg 320w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_640,c_limit\/Zhang%20Mingyuan.jpeg 640w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_960,c_limit\/Zhang%20Mingyuan.jpeg 960w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_1280,c_limit\/Zhang%20Mingyuan.jpeg 1280w, https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_1600,c_limit\/Zhang%20Mingyuan.jpeg 1600w\" sizes=\"100vw\" 
src=\"https:\/\/media.wired.com\/photos\/69f14c25ee904208d78edef4\/master\/w_1600%2Cc_limit\/Zhang%2520Mingyuan.jpeg\"\/><\/picture><\/span><\/div>\n<div class=\"CaptionWrapper-bpPcvW iDPSlt caption AssetEmbedCaption-eZIMNW gMgneI asset-embed__caption\" data-testid=\"caption-wrapper\"><span class=\"BaseText-fEwdHD CaptionText-cQpRdU kRTNAB hbiMYj caption__text\"><\/p>\n<p>A sample image created using SenseNova U1. <strong>Generated using AI<\/strong><\/p>\n<p><\/span><span class=\"BaseText-fEwdHD CaptionCredit-cUgOGk iQbGEh hRFzlA caption__credit\">Courtesy of SenseTime<\/span><\/div>\n<\/figure>\n<\/div>\n<h2 class=\"paywall\">Seeing Clearly<\/h2>\n<p class=\"paywall\">In an accompanying technical report, SenseTime claims that SenseNova-U1 generates higher-quality images than all other open source models currently on the market. Its performance is comparable to leading Chinese closed source models like Alibaba\u2019s Qwen and ByteDance\u2019s Seedream, but it still lags behind industry leaders like GPT-Image-2.0, which came out just a week ago.<\/p>\n<p class=\"paywall\">But the model\u2019s main selling point is its ability to generate images much faster than all of those models. It relies on an innovative technical structure called NEO-Unify that SenseTime previewed earlier this year.<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>SenseTime, a Chinese AI company best known for its facial recognition technology, released a new open source model on Tuesday that it claims can both generate and interpret images far faster than top models developed by US competitors. 
SenseNova U1 could help the company reclaim lost ground after it slipped from its place among the [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":86811,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_daextam_enable_autolinks":"","jetpack_post_was_ever_published":false,"_jetpack_newsletter_access":"","_jetpack_dont_email_post_to_subs":false,"_jetpack_newsletter_tier_id":0,"_jetpack_memberships_contains_paywalled_content":false,"_jetpack_memberships_contains_paid_content":false,"footnotes":"","jetpack_publicize_message":"","jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":true,"jetpack_social_options":{"image_generator_settings":{"template":"highway","default_image_id":0,"font":"","enabled":false},"version":2}},"categories":[11],"tags":[],"class_list":["post-86810","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-tech-news"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"https:\/\/diyhaven858.wasmer.app\/wp-content\/uploads\/2026\/04\/SenseTime-AI-Models-Chinese-Chips-Business-2154581485.jpg","jetpack_sharing_enabled":true,"jetpack-related-posts":[],"_links":{"self":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/86810","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/comments?post=86810"}],"version-history":[{"count":0,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/posts\/86810\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/ind
ex.php\/wp-json\/wp\/v2\/media\/86811"}],"wp:attachment":[{"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/media?parent=86810"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/categories?post=86810"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/diyhaven858.wasmer.app\/index.php\/wp-json\/wp\/v2\/tags?post=86810"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}