
Celebrating the Power of Innovation: The 2025 PCMag Technical Excellence Awards

Four decades ago, when the computing industry was in its infancy and AI was still largely the stuff of science fiction, the editors of PC Magazine sought to identify the people and technologies driving the nascent PC industry forward. With that simple notion, the PCMag Technical Excellence awards were born. The awards, first presented in the August 6, 1985, issue, were a big deal, with choreographed ceremonies followed by lavish galas where these true tech pioneers were celebrated. Some of the inaugural winners included the creators of Intel's 16-bit 286 processor, which powered the IBM PC AT. Early medals were bestowed upon groundbreaking products, such as Apple's first LaserWriter printer and the Telebit 10,000bps TrailBlazer modem. The Technical Excellence awards ("TechEx," as they were familiarly known) were published annually for more than 25 years. (By the way, did you know that you can read issues from PC Magazine's print archive on Google Books?)

Forty years later, the technology landscape looks vastly different, but there's still plenty of ground to break. That's why we're excited to reboot TechEx in 2025, a year in which AI is driving innovation toward exponential growth. Our 26 winners, carefully chosen by PCMag's expert editors and writers, highlight cutting-edge design, engineering, or pure innovation in computing, electronics, connectivity, transportation, or artificial intelligence. From graphics and display advances to breakthroughs in brain-computer interfaces and autonomous vehicles, every one of them has made a splash this year by pushing the boundaries of what's possible in the tech industry.—Wendy Sheehan Donnell

Computing

Nvidia’s GeForce RTX 5080 graphics card with DLSS 4 capability (Credit: Zain bin Awais/PCMag Composite; Joseph Maldonado)

Nvidia DLSS 4
Turbocharging frame rates with AI

Nvidia's Deep Learning Super Sampling technology debuted in 2019, and with DLSS 4 it is radically changing the computer graphics landscape once again. It isn't just raising the bar for visuals—it's lowering the barrier to entry for incredible gaming.

With DLSS 4, released earlier this year, GPUs render graphics at a lower resolution to improve performance, then increase the sharpness via machine-learning techniques for superior image quality. It also arms Nvidia's latest GPUs with multi-frame generation (MFG), inserting multiple AI-generated frames between rendered ones to boost effective frame rates. Throughout Nvidia's GeForce RTX 50-series GPU stack, DLSS 4 represents a major technical achievement, enabling its hardware to enhance players' experiences across the price spectrum.

Much of the discussion surrounding DLSS 4 focuses on high-end PCs that use upscaling to achieve fast frame rates at sharper resolutions, with effects such as ray tracing. That's praiseworthy, but it was already the case, and competitors like AMD's FSR provide similar upscaling. While enthusiasts are a key demographic, the reality is that most gamers couldn't even turn on ray tracing or hit high frame rates in cutting-edge games—until now. With DLSS 4 and MFG, even a gamer with an entry-level Nvidia GeForce RTX 5050 can enjoy smooth frame rates and ray tracing across more than 100 modern titles.—Matthew Buzzi
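
For the technically curious, here's the arithmetic behind those "effective" frame rates, as a minimal Python sketch. The 4x figure reflects DLSS 4's documented ability to generate up to three AI frames per rendered frame; the 40fps base is our own example number.

```python
# Sketch: how multi-frame generation multiplies effective frame rate.
# DLSS 4's MFG can insert up to 3 AI-generated frames after each
# traditionally rendered frame (its "4x" mode).

def effective_fps(rendered_fps: float, generated_per_rendered: int) -> float:
    """Frames shown per second when `generated_per_rendered` AI frames
    follow each rendered frame."""
    return rendered_fps * (1 + generated_per_rendered)

# Example: a budget GPU that natively renders 40fps
for n in (0, 1, 2, 3):  # 0 = MFG off; 3 = DLSS 4's 4x mode
    print(f"{n} generated frame(s): {effective_fps(40, n):.0f}fps effective")
# Prints 40, 80, 120, and 160fps. Input latency still tracks the
# rendered frames, which is why Nvidia pairs MFG with its Reflex
# latency-reduction technology.
```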

(Credit: Zain bin Awais/PCMag Composite; AMD)

AMD Ryzen AI Max+ 395 (Strix Halo)
A shape-shifting x86 powerhouse

How often do you see the same CPU in a compact desktop, a gaming tablet, and a mobile workstation? Almost never, we'll bet. That is, until AMD's Ryzen AI Max+ 395 came along. Better known as "Strix Halo," this highly efficient, potent SoC hits hard. That's thanks, in part, to a design that allows the chip to claim a sliding share of main system memory as graphics memory—up to an impressive 96GB of a 128GB potential pool. Apple popularized this "unified memory" concept with its M-series Apple Silicon chips; here, the Max+ 395 brings it to x86 and Linux. It's less about enhancing gaming (though it helps with that) and more about accelerating creative workflows, running large AI models locally, and even powering chatbots.

Based on AMD's "Zen 5" architecture, Strix Halo is impressive: 16 CPU cores (with support for up to 32 concurrent threads) power productivity and content creation, while 40 GPU cores in its integrated Radeon 8060S produce true discrete-class graphics. Thanks to an onboard neural processor rated for 50 TOPS, it's also a certified engine for Copilot+ PCs and the AI features that the Copilot+ platform enables.

Check out the Max+ 395 in PCs as diverse as the Framework Desktop, the Asus ROG Flow Z13 tablet, and the HP ZBook Ultra G1a, as well as a host of mini-desktop designs from lesser-known makers. This pioneering CPU could be an x86 trailblazer for our AI-powered future.—John Burek

(Credit: Zain bin Awais/PCMag Composite; Nvidia)

Nvidia DGX Spark GB10
An AI lab in a box

The highly anticipated DGX Spark is a compact PC that houses the potential for tremendous AI power. At its core is Nvidia's "Grace Blackwell" GB10 Superchip, a pioneering piece of silicon designed to integrate data-center architecture and petascale computing into a compact, single-system solution that can fit on any desk and run off a traditional wall outlet.

This dramatic reduction in size and power consumption marks a watershed moment for accessible AI infrastructure, creating a fully fledged "AI lab in a box" for developers and enthusiasts. Leveraging Nvidia's complete AI software stack, a single GB10 box is powerful enough to run a 200-billion-parameter model—a feat no single consumer or workstation GPU can claim.

Furthermore, it can connect with an additional DGX Spark via its high-speed 200Gbps fabric for scaling up to even larger models and distributed applications. Whether used for development, fine-tuning, building AI agents, or running powerful models on premises, these compact units are laying the foundation for the next wave of AI development.

How do we know? We're seeing new GB10-based systems from major players, including Acer, Asus, Dell, Gigabyte, Lenovo, and MSI. Take a good look—these AI supercomputers are a new class of PCs pointed at the future.—Brian Westover
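
Some back-of-the-envelope math shows why that 200-billion-parameter claim is plausible. This sketch assumes the 128GB unified memory pool Nvidia lists for the DGX Spark and standard quantization sizes; the numbers are ours, not Nvidia's spec sheet.

```python
# Why a 200B-parameter model fits on one DGX Spark: the GB10 exposes
# its 128GB of unified memory to the GPU, unlike a discrete card
# capped at 24-32GB of VRAM.

def model_footprint_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight storage only; activations and KV cache add more."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    gb = model_footprint_gb(200, bits)
    verdict = "fits" if gb <= 128 else "does not fit"
    print(f"200B params @ {bits}-bit: ~{gb:.0f}GB -> {verdict} in 128GB")
# 16-bit (~400GB) and 8-bit (~200GB) are too big, but a 4-bit
# quantized model (~100GB) fits, which is how a single GB10 box can
# host models this large and why two linked Sparks can go bigger.
```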

(Credit: Zain bin Awais/PCMag Composite; Joseph Maldonado)

CUDIMM Memory Technology
Feeding the bandwidth beast

The PC industry's need for bandwidth is never-ending—it's essential for high-performance components. This year, a new class of system RAM, CUDIMMs (short for Clocked Unbuffered Dual In-line Memory Modules), debuted to meet that need. CUDIMMs have an added component, known as a clock driver, that regulates clock speeds to maintain stability at higher frequencies.

"CUDIMM is a way to extend the supported speed range of DDR5, not just for today but into the future," notes Jake Crimmins III, Corsair's director of DRAM and memory engineering. "Overclocked UDIMMs are currently maxing out around 8,000MT per second, while CUDIMM is already pushing 9,000MT and above. And JEDEC (the standards body that governs PC memory) just updated the SPD Content standards to add support for DDR5-9200 modules in the future."

Currently, only Intel's "Arrow Lake" platform supports CUDIMM on desktops; it has yet to reach AMD. While still evolving, CUDIMM memory employs standard DDR5 components and will function like standard DDR5 sticks in motherboards that don't support it. The memory is already offered by key makers such as Corsair, Crucial/Micron, G.Skill, and Kingston. Because it's a performance-boosting add-on to DDR5 rather than something entirely new, CUDIMM is set up for success as the demand for bandwidth continues to grow.—Michael Sexton
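
To see what those MT-per-second figures mean in practice, here's the standard DDR bandwidth arithmetic (our illustration, assuming a typical dual-channel desktop).

```python
# DDR5 moves 8 bytes per transfer per channel, so peak bandwidth is
# simply MT/s x 8 bytes x channel count.

def bandwidth_gbps(mts: int, channels: int = 2, bytes_per_transfer: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s for a given DDR5 transfer rate."""
    return mts * 1e6 * bytes_per_transfer * channels / 1e9

for mts in (6400, 8000, 9200):  # JEDEC base, overclocked UDIMM, CUDIMM target
    print(f"DDR5-{mts}: ~{bandwidth_gbps(mts):.0f}GB/s dual-channel peak")
# DDR5-6400 -> ~102GB/s; DDR5-8000 -> ~128GB/s; DDR5-9200 -> ~147GB/s.
# The clock driver on a CUDIMM is what keeps signals clean enough to
# sustain those upper rates.
```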

(Credit: Zain bin Awais/PCMag Composite; John Burek)

Asus Back to the Future (BTF) 2.5
A boon for meticulous PC builders

PC aesthetics have reached their apex: It's now possible to eliminate almost all visible cables inside your case. (Mind you, they're not truly gone, just increasingly well-hidden.) Asus' Back to the Future (BTF) ecosystem, alongside MSI's similar Project Zero, has rewired the build playbook by hiding all that messy component cabling behind your motherboard, with reverse-facing cable headers and sockets.

The major hurdle to a truly cable-free PC, however, has always been the graphics card. Eliminating its wiring required, in Asus' earlier BTF designs, a specialized card with a bottom-mounted power connector in place of power cables—and that solution worked with only a few motherboards.

The first complete bridge between old-school and "cable-free" builds, BTF 2.5 changes the game. New compliant graphics cards, like the Asus ROG Astral GeForce RTX 5090 BTF Edition, come with a "GC-HPWR" power module on the underside that pops on or off, so you can run these cards in a full BTF setup or in a standard PC. More cards are coming, and the ecosystem also encompasses motherboards, PC chassis, and other gear.—JB

(Credit: Zain bin Awais/PCMag Composite; Thomas Soderstrom)

Corsair Air 5400
Cool and clever air diversion design

PC thermal innovations may seem destined to be forever incremental, so we have to hand it to Corsair for its cool-running Air 5400. This desktop tower chassis features a unique third-chamber design that enables fully isolated CPU cooling.

The Air 5400 looks like any of a dozen aquarium-style PC cases on its glassy side. Look at it from the right, however, and you'll see a vertical gap running up and down the right panel, with a diversionary air dam positioned directly behind the front-panel radiator mount. Install a 360mm radiator up front (presumably routed to your CPU), and its fans will draw cool air from the outside, push it through the rad, and ventilate it straight out of the case. The separated-airflow scheme also lets you cool hot-running mainboard components or the graphics card in their own zones.

The concept isn't 100% new. HP's Omen 45L ATX, with its Cryo Chamber design (a radiator zone suspended above the case), is similar. However, Corsair deserves props for incorporating the air diversion directly into the chassis, and for doing it elegantly enough to make it a showy part of the design. With a Corsair 360mm radiator installed up front, the case delivered a 10-degree-Celsius drop in CPU temps in our tests.—JB

Google’s quantum chip, Willow (Credit: Zain bin Awais/PCMag Composite; Google)

Google Willow Quantum Computing
A significant leap in quantum

Five years ago, we argued that massive investments in quantum computing research were creating a bubble. Well, that bubble has yet to burst. Innovators continue to build quantum computers at a breakneck pace, hoping to soon see the day when quantum computing becomes useful for mainstream tasks. (Some say that will take years; others say decades.)

These builders are trying to simultaneously increase the number of quantum bits (or qubits) in a computer while preventing that computer from making mistakes. Qubits are far more powerful than the binary bits in your laptop, which exist in either an open or closed state (i.e., zeros or ones). Qubits, by contrast, can be "entangled" or exist in multiple states simultaneously, a phenomenon known as superposition. But increasing the number of qubits usually results in computers that make so many errors that they're useless. Google's Willow is different: It can reduce errors exponentially as more qubits are added. "This cracks a key challenge in quantum error correction that the field has pursued for almost 30 years," says Hartmut Neven, Google Quantum AI founder and lead. The company plans to use Willow to address real-world, relevant problems that are intractable for classical computers to solve. We hope it succeeds.—Tom Brant
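
Here's a toy model of that exponential error suppression. Google's published Willow results showed logical errors dropping by roughly half each time the error-correcting code's distance grew (from 3 to 5 to 7); the base error rate and suppression factor below are illustrative stand-ins, not Google's exact figures.

```python
# Below-threshold error correction in miniature: each step up in code
# distance costs more physical qubits but divides the logical error
# rate by a constant factor.

def logical_error_rate(base_rate: float, distance: int, lam: float = 2.0) -> float:
    """Logical error rate for an odd code distance d, suppressed by a
    factor `lam` each time d grows by 2 (distance 3 is the smallest)."""
    steps = (distance - 3) // 2
    return base_rate / (lam ** steps)

# A surface-code logical qubit needs roughly 2 * d^2 physical qubits:
for d in (3, 5, 7):
    qubits = 2 * d * d
    print(f"distance {d}: ~{qubits} physical qubits, "
          f"error rate ~{logical_error_rate(0.003, d):.4f}")
# More qubits, exponentially fewer errors: the opposite of what scaling
# up a quantum machine usually produces.
```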

Synchron Stentrode model at the company’s Brooklyn headquarters (Credit: Zain bin Awais/PCMag Composite; Joseph Maldonado/Andriy Onufriyenko/Getty Images)

Synchron Stentrode Brain-Computer Interface
Smart new connections for brain-implant patients

What if you could plug your brain into your tablet, just like a keyboard or mouse, and control it with only your thoughts? Thanks to Synchron, that future is already here. In May, the company announced a partnership with Apple that allows people with Synchron's brain implant to pair it via Bluetooth with their iPhone, Apple iPad, or Apple Vision Pro headset. Once connected, patients can send messages, make calls, browse the web, and more—simply by thinking about it.

At the heart of this breakthrough is the Stentrode implant, which records the brain's electrical signals while complex data science translates those signals into digital commands, seamlessly turning thought into action.

For those with life-altering conditions like ALS, this is nothing short of transformational. Many patients who have lost the ability to move their limbs—and with it, their independence—can now reclaim some control over daily life and communication with loved ones.

Another part of Stentrode's genius is its minimally invasive surgical procedure. Unlike Elon Musk's Neuralink, implanting it does not require drilling into the skull. Instead, the Stentrode is placed in a blood vessel near the brain, a simpler and less risky proposition. Could this be the moment when brain-computer interfaces hit the mainstream? It's too soon to tell, but for now, Synchron is all about restoring agency to the people who need it most.—Emily Forlini

Consumer Electronics
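
As a rough illustration of what "translating signals into digital commands" involves, here's a drastically simplified sketch of one BCI decoding step. Synchron's actual decoder is proprietary and far more sophisticated; the frequency band, threshold, and sample rate here are invented for the example.

```python
# From raw electrode samples to a discrete command, the kind a paired
# phone could treat like a Bluetooth button press.

import numpy as np

def band_power(signal: np.ndarray, hz: float, lo: float, hi: float) -> float:
    """Power in one frequency band of an electrode trace, via FFT."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / hz)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    return float(spectrum[(freqs >= lo) & (freqs < hi)].sum())

def decode(signal: np.ndarray, hz: float = 250.0, threshold: float = 1e4) -> str:
    """Fire a 'select' command when a motor-intent signature appears."""
    return "select" if band_power(signal, hz, 8.0, 30.0) > threshold else "idle"

rng = np.random.default_rng(0)
rest = rng.normal(0, 1, 250)                                  # 1s of noise
intent = rest + 20 * np.sin(2 * np.pi * 15 * np.arange(250) / 250)
print(decode(rest), decode(intent))                           # idle select
```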

(Credit: Zain bin Awais/PCMag Composite; Will Greenwald)

Hisense 116UX RGB-LED Backlight System
Setting a new brightness bar

Producing the brightest and most colorful picture we've ever measured, RGB-LED backlighting signals a major advancement in television display quality. RGB-LED uses clusters of colored LEDs, rather than just white or blue ones, to illuminate an LCD panel and enhance the colors it can display. Each colored LED can be brightened or dimmed based on what the pixels in front of it are showing, pushing the range of color you see past what the LCD alone can display.

Earlier this year, Hisense was the first to bring this technology to consumers with its 116UX, and even this early iteration is impressive. With continued development to improve its capabilities, RGB-LED could easily rival OLED panels for top-of-the-line TVs and succeed where micro-LED has struggled to gain traction. Of course, cutting-edge comes at a cost: The 116-inch 116UX will set you back around $30,000, while the "smaller" 100-inch model goes for $15,000. For now, these dazzling displays will be out of reach for most, but they offer a clear glimpse of the future.—Will Greenwald
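
Conceptually, the per-zone control works something like the sketch below, which is our own simplification rather than Hisense's algorithm: instead of one white-brightness value per dimming zone, the TV can drive red, green, and blue levels independently.

```python
# Local dimming with an RGB backlight: derive per-zone RGB drive
# levels from the pixels each zone sits behind.

import numpy as np

def rgb_zone_levels(frame: np.ndarray, zones: tuple[int, int]) -> np.ndarray:
    """frame: HxWx3 image in [0,1]; returns per-zone RGB drive levels,
    using each zone's peak channel values so highlights stay bright."""
    zh, zw = zones
    h, w, _ = frame.shape
    levels = np.zeros((zh, zw, 3))
    for i in range(zh):
        for j in range(zw):
            tile = frame[i*h//zh:(i+1)*h//zh, j*w//zw:(j+1)*w//zw]
            levels[i, j] = tile.reshape(-1, 3).max(axis=0)
    return levels

# A frame that's bright red on the left, dim blue on the right:
frame = np.zeros((480, 960, 3))
frame[:, :480, 0] = 1.0   # red half
frame[:, 480:, 2] = 0.4   # dim blue half
print(rgb_zone_levels(frame, (1, 2)))
# Zone 1 drives only its red LEDs at full power; zone 2 only blue at
# 40%. The backlight itself is colored, not merely dimmed.
```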

(Credit: Zain bin Awais/PCMag Composite; Joseph Maldonado)

Samsung Galaxy XR & Android XR Mixed Reality Platform
One step closer to mixed reality for all

Mixed reality (XR) blends the fully immersive world of virtual reality with the real one, allowing you to build and interact with digital experiences anchored in your physical surroundings. The Android XR-based Samsung Galaxy XR is a significant step forward in this space. The Galaxy XR is an immersive headset that can present software as floating objects in your own surroundings or drop you into an entirely new environment, and you control the entire experience with your eyes and hands, tracked by sophisticated external and internal cameras and other sensors.

Much of this territory was charted first by Apple, but the Galaxy XR is more than just Samsung's take on the Vision Pro—it's shaping up to be the blueprint for Android-based XR. Unlike Apple's closed visionOS, this is a much more open ecosystem. Apps built for the Galaxy XR will be compatible with future Android XR headsets, including smart glasses and other devices on the platform, thanks to built-in development support. And at $1,800, it costs about half as much as Apple's headset, making cutting-edge mixed reality more accessible than ever.—WG

(Credit: Zain bin Awais/PCMag Composite; Will Greenwald)

Meta Ray-Ban Display's Color Waveguide Display & Neural Band Controller
Smart glasses that finally feel legit

Lightweight and wireless, smart glasses have come a long way in the past couple of years, but Meta's Ray-Ban Display marks a true breakthrough. Featuring a 600-by-600-pixel full-color waveguide display, these glasses leave behind the clunky, monochrome screens of earlier models. You can view photos, take video calls, read live captions during conversations, and see maps—all while maintaining a clear view of your surroundings through the lenses. Similar lightweight, wireless, display-equipped smart glasses I've tested in the past have had green-only displays, so color is a major improvement.

They're paired with a wrist-worn Neural Band controller that reads subtle muscle movements, letting you navigate apps with simple finger pinches and swipes. While the gestures aren't flawless, the interface is the most intuitive I've used. Like their displays, the menu systems and controls on earlier waveguide smart glasses were much more awkward and often felt incomplete. Between the hardware and interface improvements, Meta's latest take on smart glasses feels like the most refined and advanced of its kind.—WG

(Credit: Zain bin Awais/PCMag Composite; Jim Fisher)

Sony FE 50-150mm F2 GM Optical Design
The first full-frame F2 telezoom

Sony's FE 50-150mm F2 GM is a TechEx winner not because of any single innovation but because of the sum of its parts. It doesn't look any different from other professional lenses; it's what's on the inside that counts. Sony showcases its optical design expertise with an 18-element, 15-group formula that includes an Extreme Aspheric (XA) element, designed to capture images with edge-to-edge clarity and soft, defocused backgrounds. Sony's manufacturing process molds the XA element with 0.01-micron precision, completely eliminating the concentric onion-skin specular highlights that most lenses with aspheric glass exhibit—a subtle differentiator, but one that discerning photographers appreciate.

The lens uses dual magnetic-drive XD focus motors to keep pace with Sony's fastest cameras, ensuring accurate results at up to 120fps for both still images and movies. As a result, it's just as suitable for action and cinema as it is for portraiture. Amazingly, the zoom weighs just 3 pounds, not much more than the average 70-200mm F2.8, while gathering twice the light. It's the culmination of years of optical development, and it's peerless—no other zoom matches its mix of telephoto reach and F2 optics.—Jim Fisher
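
The "twice the light" claim is standard photography math, since transmitted light scales with aperture area, which goes as the inverse square of the f-number. A quick worked check:

```python
# How much more light an F2 lens admits than an F2.8 lens.

def light_ratio(f_a: float, f_b: float) -> float:
    """How much more light aperture f_a admits than f_b."""
    return (f_b / f_a) ** 2

print(light_ratio(2.0, 2.8))   # ~1.96, i.e., one full stop brighter
# In practice that means half the ISO (less noise) or twice the
# shutter speed (less motion blur) at the same exposure, which is the
# whole point of hauling an F2 telezoom.
```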

(Credit: Zain bin Awais/PCMag Composite; Tonal)

Tonal 2 Smart View AI-Powered Fitness Coaching
Nailing your perfect form

Besides the actual work, one of the hardest parts of working out is mastering your form—especially when you're exercising at home. Tonal's second-gen smart strength training machine is a leap forward for AI-powered personal training, incorporating several form-feedback technologies to rival a human coach.

Its integrated Smart View feature uses a camera and machine learning to track 32 body points to determine whether you're performing moves correctly. The machine also analyzes approximately 50 data points per second from its cables to evaluate your range of motion, symmetry, and pace, and it can even spot you when it senses you need assistance, surpassing the capabilities of Peloton and Tempo. When you make mistakes, it uses generative audio to provide real-time corrections, and it visually records your errors for you to review later. The company has trained its AI on proper technique under the guidance of a team of PhDs in physiology and kinesiology. "Everything we do is science-backed," Tonal's chief product officer, Jonathan Shottan, says. "We don't want to injure you as we make you stronger."

In my testing, the Tonal 2’s virtual coaching helped me improve my form for racked squats, work to my full range of motion for racked offset split squats with rotation, and optimize my setup for chest press and barbell deadlift.—Angela Moscaritolo
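
To make the cable-analytics idea concrete, here's a simplified sketch of deriving two of those metrics, range of motion and rep tempo, from a position stream. It's our own illustration; Tonal's actual pipeline and data format aren't public.

```python
# Form metrics from a stream of cable-position samples, at the ~50
# samples per second the article mentions.

def rep_metrics(positions: list[float], hz: float = 50.0) -> dict:
    """positions: cable extension in cm over one rep."""
    rom = max(positions) - min(positions)   # range of motion
    duration = len(positions) / hz          # rep tempo, in seconds
    return {"range_of_motion_cm": round(rom, 1),
            "rep_seconds": round(duration, 2)}

# One simulated two-second rep: lift to 60cm, lower back down
rep = [i * 1.2 for i in range(50)] + [60 - i * 1.2 for i in range(50)]
print(rep_metrics(rep))  # {'range_of_motion_cm': 60.0, 'rep_seconds': 2.0}
# Comparing left/right cable traces the same way would yield a
# symmetry score; a sudden stall mid-lift is the cue to engage a spot.
```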

(Credit: Zain bin Awais/PCMag Composite; Andrew Gebhart)

Eufy Marswalker
The key to a true hands-off home cleaning experience

Breaking down one of the barriers to complete home automation, the Eufy Marswalker solves a simple physics problem. This cleverly designed attachment lifts your robot vacuum up and down flights of stairs, enabling it to clean your entire multi-level home without requiring your intervention. See? It's simple, but somehow revolutionary. After all, stairs have been the Achilles' heel of robot vacuums since their inception.

The Marswalker somewhat resembles a NASA rover. It has a hollow center, and when it's time to hit the steps, it flips open, deploying a ramp on one side that allows the robot vacuum to maneuver into its central compartment. With the vacuum secured inside, the Marswalker extends four long plastic arms from each corner, helping it feel for and pivot up or down the stairs. The arms flatten out when the robot is climbing or descending, allowing it to move smoothly. The long, tank-like treads on the bottom help it grip the surface of the steps.

Though it launched earlier this year as an add-on for the Eufy RoboVac Omni S2, we expect the technology to proliferate within Eufy's lineup and likely inspire competitors to follow.—Andrew Gebhart

(Credit: Zain bin Awais/PCMag Composite; Iyaz Akhtar)

Google Pixel 10 Voice Translate
On-the-fly translation that sounds like you

Unveiled alongside the Pixel 10 in August, Voice Translate brings real-time language translation to your live phone calls. The concept isn't new, but Google's approach is—a seamless blend of live, in-call, on-device processing with translation that actually mimics each speaker's voice. The result? Two people who speak different languages can hold a natural conversation without missing a beat. A boon for international travelers.

Voice Translate currently supports 10 languages. The caller needs a Pixel 10 and must download the relevant language pack beforehand, but the person on the other end can use any phone. Once the call starts, the magic kicks in: They'll hear the translation in the caller's own voice—an effect that feels like something out of science fiction. The feature also provides live transcriptions on both sides, so each person can read and hear the conversation in their own language.

Because the translation happens entirely on the device, it's both fast and secure. Voice Translate works with the phone's speaker or Google Pixel Buds. Apple later introduced a similar Live Translation feature with the Apple iPhone 17 and the AirPods Pro 3, though it doesn't mimic speakers' voices.—Eric Zeman

Connectivity & Transportation
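
Architecturally, a feature like this implies a three-stage on-device pipeline: speech recognition, translation, then synthesis in a cloned voice. The sketch below is our own mental model with stubbed-in stand-ins, not Google's implementation.

```python
# One chunk of call audio in, translated audio (in the original
# speaker's voice) plus a caption transcript out. The three stage
# functions are hypothetical stand-ins for on-device models.

def speech_to_text(audio: bytes, lang: str) -> str:
    return "ou est la gare"           # stub: recognized speech

def translate(text: str, src: str, dst: str) -> str:
    return "where is the station"     # stub: translated text

def synthesize(text: str, voice_profile: bytes) -> bytes:
    return b"\x00" * 1600             # stub: audio in the caller's cloned voice

def translate_call_chunk(audio: bytes, src: str, dst: str,
                         voice_profile: bytes) -> tuple[bytes, str]:
    text = speech_to_text(audio, src)
    caption = translate(text, src, dst)
    return synthesize(caption, voice_profile), caption

audio_out, caption = translate_call_chunk(b"...", "fr", "en", b"profile")
print(caption)  # "where is the station", shown as a live transcription
# Running every stage locally is why a language pack must be downloaded
# first, and why the call content never leaves the phone.
```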

(Credit: Zain bin Awais/PCMag Composite; Michael Kan)

T-Mobile T-Satellite
No signal? No problem

A satellite orbiting more than 200 miles above your head can now keep you connected—letting you text, chat, and even jump on a video call from places where your phone previously showed zero bars.

Apple cracked open the door for satellite connectivity in 2022 with its Emergency SOS feature. But this year, T-Mobile and SpaceX blew it wide open with T-Satellite, an ambitious service designed to erase cellular dead zones across the US. Powered by a constellation of 650 Starlink satellites acting as orbiting cell towers, the system turns the sky itself into a network.

We were seriously impressed by the results in our tests on a remote beach in Northern California. Using T-Satellite, we sent texts, scrolled through X, and even joined WhatsApp video calls—all from a stretch of land that was once a dead zone. Signal quality isn't perfect yet, but performance is only expected to improve, especially now that SpaceX has acquired valuable radio spectrum from Boost Mobile's parent company, EchoStar.

T-Satellite is also kicking off a new space race. AT&T and Verizon are lining up their own competing service through AST SpaceMobile, while Apple plans to beef up its offering through its partner Globalstar.—Michael Kan
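
A quick physics check (ours, using the article's rough altitude figure) shows why low-Earth-orbit satellites can plausibly serve as cell towers when geostationary ones never could: the round-trip radio delay is orders of magnitude shorter.

```python
# Round-trip radio delay to a satellite, ignoring processing and
# ground-link time. Altitudes are approximate.

C_KM_PER_S = 299_792  # speed of light

def round_trip_ms(altitude_km: float) -> float:
    return 2 * altitude_km / C_KM_PER_S * 1000

print(f"LEO (~322km, ~200 miles): {round_trip_ms(322):.1f}ms")   # ~2.1ms
print(f"GEO (35,786km):           {round_trip_ms(35_786):.0f}ms") # ~239ms
# A couple of milliseconds each way is easily within what video calls
# tolerate; a quarter-second geostationary hop is not.
```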

(Credit: Zain bin Awais/PCMag Composite; Eric Zeman)

Apple Watch Ultra 3 Satellite Connectivity
A lifesaver on your wrist

This year's Apple Watch Ultra 3 introduces several upgrades, but its satellite connectivity undoubtedly stands out. This potentially life-saving feature lets you call for help even when you're, say, deep in the wilderness, far from cellular or Wi-Fi coverage. The built-in antenna had to be compact enough to fit inside a smartwatch, yet powerful enough to connect with satellites orbiting 800 miles above Earth—an impressive engineering achievement.

How does it work? When you're off the grid, you'll see a satellite icon pop up at the top of the watch face. Tapping it pulls up a menu that allows you to connect to a satellite, then make an emergency call, send a text to a recent contact, or share your location with loved ones.

While Apple wins our award for technical excellence, Google deserves an honorable mention for launching satellite communication shortly after on the Pixel Watch 4. Google's implementation is limited to SOS calls only, however. Regardless, the feature is an outstanding safety addition for smartwatches.—AG

Starlink Beam Switching illustration (Credit: Zain bin Awais/PCMag Composite; SpaceX)

Starlink Beam Switching Technology
Seamless satellite signals

Starlink's website lists the following instructions for getting online with satellite internet: "(1) Plug it in (2) Point it at the sky." It's that simple, thanks in part to Starlink beam switching, a technology that further enhances Starlink's already sophisticated Earth-to-space beam-forming.

Launched in July, beam switching enables a single Starlink dish to communicate with multiple satellites simultaneously. As one satellite drifts out of range and another moves in, your dish now predicts the handoff and switches seamlessly, keeping your connection locked in without interruption. No dropouts. No buffering. Just continuous high-speed internet—no matter what's passing overhead.

Beam switching is only possible thanks to a combination of Starlink's phased-array antenna beam-forming tech, terminal and network software improvements, and the sheer number of low-Earth-orbit (LEO) satellites in SpaceX's growing constellation overhead. The impact is felt where it counts most: more resilient connections, reduced latency, and an installation process that's more straightforward than ever.—BW
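
The "predict the handoff" idea boils down to a make-before-break scheme: because a phased array can hold beams on two satellites at once, the terminal can bring the next link up before the current one degrades. This toy scheduler is our own illustration, not SpaceX's software.

```python
# Pick the next satellite before the serving one sinks too low.

from dataclasses import dataclass

@dataclass
class Sat:
    name: str
    elevation_deg: float    # current angle above the horizon
    elevation_rate: float   # degrees/second; negative = setting

def seconds_until_too_low(sat: Sat, min_elev: float = 25.0) -> float:
    if sat.elevation_rate >= 0:
        return float("inf")  # a rising satellite won't drop below soon
    return (sat.elevation_deg - min_elev) / -sat.elevation_rate

def plan_handoff(current: Sat, candidates: list[Sat],
                 lead_time_s: float = 10.0) -> Sat | None:
    """If the serving satellite will sink too low soon, pick the best
    rising candidate now, while both links can be held in parallel."""
    if seconds_until_too_low(current) > lead_time_s:
        return None          # keep the current beam
    rising = [s for s in candidates if s.elevation_rate > 0]
    return max(rising, key=lambda s: s.elevation_deg, default=None)

serving = Sat("sat-a", 27.0, -0.25)  # setting fast
nxt = plan_handoff(serving, [Sat("sat-b", 31.0, 0.2), Sat("sat-c", 18.0, 0.3)])
print(nxt.name if nxt else "hold")   # "sat-b": switch before dropout
```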

(Credit: Zain bin Awais/PCMag Composite; Waymo)

Waymo Autonomous Vehicles
Smarter self-driving cars paving the way

Waymo's self-driving electric cars are revolutionizing American roads, smashing societal norms, and opening up new possibilities for the future of transportation. This year, Waymo expanded to more people and places. It's now offering paid rides in more US cities than any other company, operating in Atlanta, Austin, LA, Phoenix, and San Francisco. It also added Silicon Valley, expanded its LA footprint, and brought its service to the Uber app in Austin and Atlanta. This is the first time Uber has managed a fleet of Waymo vehicles, another big step toward making self-driving rides available to the masses.

The more cities, weather conditions, and unique routes these vehicles take on, the more the Waymo Driver software learns and improves. Along with that software, the system uses 29 cameras, as well as radar and lidar, to provide a 360-degree view of the road, pedestrians, and bikers. Its approach differs from Tesla's, whose robotaxis rely on cameras alone and are currently limited to Austin. In 2026, Waymo will debut its sixth-generation software platform, promising improved technical capabilities at a lower cost. It's also expanding service to Dallas, Miami, and Washington, DC. Meanwhile, it's being tested in New York City and will soon start in London and at San Francisco International Airport. With no close competitors and no major accidents so far, the self-driving taxi game is Waymo's to win.—EF

Artificial Intelligence

(Credit: Zain bin Awais/PCMag Composite; René Ramos/OpenAI)

OpenAI ChatGPT Deep Research
AI-driven research with robust sourcing

The web is wide, deep, and often murky. Navigating it to clearly and fully understand a given topic can sometimes feel like an impossible dive. If only you had a skilled research assistant…

Launched in February, ChatGPT's Deep Research feature leverages the web to condense information and generate thoughtful, thorough reports on any topic imaginable, complete with dozens of sources and in-text citations. As Isa Fulford, research lead for deep research at OpenAI, puts it, this is the company's "first widely used agent capable of reasoning for extended periods to synthesize information and produce research and analysis."

This claim holds up in our testing, with the research tool generating detailed reports on topics such as selecting the ideal showerhead or optimizing character builds in a video game. In an era when AI can easily spread misinformation and cause misunderstandings, the depth and accuracy of ChatGPT's Deep Research stand out. Google Gemini came first with its own deep research mode, but ChatGPT's implementation offers more robust sourcing, strengthening the connection between claims and evidence. It also asks smart follow-up questions before diving into a topic, helping to ensure sharper, more relevant results. After a variety of upgrades and tweaks to make it more accessible, including for free users, Deep Research is in its best state ever.—Ruben Circelli

Editors' Note: Ziff Davis, PCMag's parent company, filed a lawsuit against OpenAI in April 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
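
Under the hood, tools like this run a plan-search-read-synthesize loop for minutes at a time. The sketch below is our own generic rendering of that loop, with hypothetical llm() and web_search() stand-ins rather than OpenAI's actual agent; note how every extracted fact keeps its source URL, which is what enables the in-text citations.

```python
# A generic deep-research agent loop. `llm` and `web_search` are
# hypothetical stubs standing in for a language model and search API.

def llm(prompt: str) -> str: ...
def web_search(query: str) -> list[dict]: ...  # [{"url": ..., "text": ...}]

def deep_research(topic: str, max_rounds: int = 5) -> str:
    notes: list[dict] = []            # every claim keeps its source
    query = topic
    for _ in range(max_rounds):
        for page in web_search(query)[:3]:
            fact = llm(f"Extract facts about {topic!r} from:\n{page['text']}")
            notes.append({"fact": fact, "source": page["url"]})
        # The agent decides what it still doesn't know and searches again
        query = llm(f"Given these notes, what should we search next?\n{notes}")
        if query.strip().lower() == "done":
            break
    # The final report cites each note's URL inline, the "robust
    # sourcing" that distinguishes one implementation from another.
    return llm(f"Write a cited report on {topic!r} from:\n{notes}")
```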

(Credit: Zain bin Awais/PCMag Composite; shoaib majeed/Tatiana Pogorelova/via Getty Images)

Google Gemini Nano Banana
Fast, free, incredibly capable image editing

In August, Google introduced Nano Banana, its free AI image editing tool based on Gemini's 2.5 Flash model. AI chatbot image editing (and even the AI tooling in full-blown image editing apps) often struggles to avoid uncanny errors and distortion, or requires many generations to produce something usable. Nano Banana, on the other hand, effortlessly edits images in mere seconds based on your prompts and usually gets things right on the first try. Your requests can be simple, such as removing a troublesome object, or advanced, like applying a custom creative filter.

The results can be somewhat blurry and low-resolution compared with the original image you upload, but we expect those issues to become less prominent as the technology develops. For now, Nano Banana stands out for its ease of use and versatility, as well as for how it avoids making your images look like awkward AI generations. The technology effectively democratizes image editing for the average person. It's one of the best AI image editing packages we've seen, it's free, and it's only becoming more accessible as it shows up in other Google apps.—RC
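
For developers, the same capability is exposed through Google's genai Python SDK. This minimal sketch follows the documentation at the time of writing; the model name ("gemini-2.5-flash-image" is the model publicly nicknamed Nano Banana) and the response handling should both be verified against Google's current docs.

```python
# Prompt-based image editing via the google-genai SDK.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash-image",
    contents=[
        Image.open("backyard.png"),               # the photo to edit
        "Remove the garden hose from the lawn",   # the plain-English edit
    ],
)

# The edited image comes back as inline bytes alongside any text parts.
for part in response.candidates[0].content.parts:
    if part.inline_data:
        Image.open(BytesIO(part.inline_data.data)).save("backyard-edited.png")
```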

(Credit: Zain bin Awais/PCMag Composite; Apple App Store)

OpenAI Sora 2
Next-level AI video generation

Equal parts scary, dystopian, and, somehow, fun, OpenAI's latest Sora version takes deepfake technology mainstream, letting you generate hyperrealistic AI videos of yourself, your friends, or even dead celebrities. AI video generation isn't new, but Sora takes it to another technical level. It can mimic not just faces but also voices, natural body movements, and dynamic camera angles. The results are often so lifelike that many clips are nearly indistinguishable from real footage. I couldn't help but laugh as I made realistic-looking videos of OpenAI's CEO, Sam Altman, endorsing PCMag while accepting a bag of cash, or telling us to shut up about the AI bubble.

Sora feels like Pandora's box come to life, at once a marvel and a menace. Could it one day rival TikTok in popularity? Or just as easily unleash a new wave of viral misinformation? Maybe it'll reinvent entertainment itself, generating endless spinoffs of your favorite TV shows. For now, many Sora users are urging OpenAI to loosen the app's content restrictions.—MK

(Credit: Zain bin Awais/PCMag Composite; Microsoft)

Microsoft Copilot Vision With Highlights
AI that sees, guides, and talks you through it

Copilot Vision with Highlights in Windows 11 represents a significant step forward in AI-powered assistance, potentially revolutionizing PC support. Activated via voice or keyboard on your Windows 11 PC, Copilot Vision can converse with you about anything you see on your screen, move a pointer to the place that requires attention and highlight it, and verbally tell you what you need to do. No more digging through help pages, watching tutorial videos, or dialing up your geeky pals.

Vision with Highlights can also guide you through a complex edit in Photoshop, provide hints for the next step in a game, or describe and provide background information on any image or text on your screen, tapping into its vast knowledge base and formulating responses in a lifelike tone. That's impressive on several technical levels, requiring real-time screen analysis, contextualization, rendering, and natural language processing.

Similar AI tools from Apple and Google operate only in prescribed contexts and lack the guided on-screen navigation element, putting Microsoft well ahead of the pack.—Michael Muchmore
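
In outline, such a feature amounts to a capture-reason-guide loop. The sketch below is our own abstraction with hypothetical helper functions, not Microsoft's architecture.

```python
# See-guide-speak assist loop: capture the screen, ask a vision-
# language model where to act, then draw a highlight and narrate.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Guidance:
    x: int                  # where to point on screen
    y: int
    instruction: str        # what to tell the user

# Hypothetical stand-ins for OS and model services:
def capture_screen() -> bytes: ...
def vision_llm(screenshot: bytes, goal: str) -> Guidance: ...
def draw_highlight(x: int, y: int) -> None: ...
def speak(text: str) -> None: ...

def assist(goal: str, task_done: Callable[[], bool]) -> None:
    """Re-read the screen each step so guidance tracks what the user
    actually clicked, then point and narrate."""
    while not task_done():
        step = vision_llm(capture_screen(), goal)
        draw_highlight(step.x, step.y)   # the on-screen "Highlight"
        speak(step.instruction)          # the verbal walk-through
```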

(Credit: Zain bin Awais/PCMag Composite; Perplexity)

Perplexity Comet Browser
A new orbit for web exploration

For better or worse, AI changed the way we interact with the internet this year. From search to agentic tools to automated tasks, Perplexity's Comet browser stands at the forefront. Comet isn't the first combination of AI and web browsing—Copilot features in Edge and in-browser Gemini in Chrome came before it—nor is it the latest, with new capabilities coming to browsers like OpenAI's Atlas, Opera Neon, and newcomers such as The Browser Company's Dia.

However, Comet stands out by shifting from AI tools bolted onto an existing browser to an AI-first experience, in which Perplexity's AI search and multiple model tools take center stage. Comet runs on Google's Chromium technology, but it's more than "just" a browser; it's an AI assistant that happens to navigate the web. That lets you ask questions about a video as you watch it, jump to another page in a different tab and continue the conversation as you dig into a related reference, and then keep going in a third tab as you turn those summarized insights into action—a task list, an essay answer, a travel plan, or whatever you want. It's a whole new way to dive into the web, from AI-powered search to agentic browsing that lets you prompt for actions on a web page, or even across tabs with Comet's sidecar AI assistant.—BW

(Credit: Zain bin Awais/PCMag Composite; James Martin)

Anthropic Claude Code
Reshaping how software gets built

First launched in February as a research preview, Anthropic's Claude Code has revolutionized software engineering. This always-available AI "agent" resides within the programmer's terminal and on the web, acting as a quasi-coworker. Running on Anthropic's AI models, it turns engineers into managers of their own projects, directing the AI to complete large portions of the work for their final review. Claude Code can write code, fix bugs, update files, perform code reviews, and more—all in parallel, based on simple prompts. It can even create tasks in project management tools like Asana. "Whatever tools you use as an engineer, Claude Code can use," Head of Claude Code Boris Cherny tells us in an interview.

Cherny accidentally created Claude Code while exploring the potential of Anthropic's AI models. He then gave it to the company's engineers, who swiftly adopted it. Its subsequent public release lit a fire under competitors such as OpenAI, which followed with its own version. Thus began the vibe-coding craze of 2025, which broke Silicon Valley's traditional coding approach and dramatically accelerated the time to build new technology.

Claude Code is now a daily staple at major companies, including Deloitte, IBM, Salesforce, and Uber. Anthropic continues to develop it and is working on perfecting how multiple models work together, like a team of coworkers (or "Claudes") who handle tasks together, independently, for hours on end without human input.—EF

(Credit: Zain bin Awais/PCMag Composite; Sesame AI)

Sesame AI Voice Companions
AI you'll want to talk to

Artificial voices have come a long way since the early days of Alexa, Cortana, and Siri, when they were stiff and robotic. The rise of generative AI has ushered in a new era of hyper-realistic vocal synthesis—and Maya and Miles, Sesame's AI voice companions, are proof. These two sound so real, it's almost scary. They pause, sigh, laugh softly, and nail the subtleties of human speech to near perfection.

What makes Maya and Miles unique isn't just their realistic voices and mannerisms—it's how easy they are to talk to. Most AI voice tools are still glorified text-to-speech machines, whereas Sesame's LLM-powered chat lets conversations flow naturally on virtually any topic in real time. This makes Sesame extremely approachable for anybody who wants to have a casual conversation with an AI. When smart language meets believable voices, the result is something far more personal than a chatbot and far more engaging than a virtual assistant.—RC

What's on your list of the best technological innovations in 2025? Let us know in the comments below.

Published: 2025-10-29 13:01:00

Source: www.pcmag.com