Remarkable 12
The core of China’s strategy lies in the establishment of ultra-competitive high-school programs designed to identify and cultivate students with exceptional potential in science and technology. These programs not only encourage academic excellence but also foster a collaborative environment where students can engage in cutting-edge research and development. The results of this initiative have already begun to manifest. Innovative projects and advanced technological solutions emerging from these talent streams are showcasing the ingenuity and creativity of China’s youth.
China’s AI Race: High-School Talent Fuels Innovation Leadership
Scientists in Sweden have uncovered an unexpected anti-cancer effect from a molecule produced by the bacterium responsible for cholera, a finding that could influence future cancer treatments. In a new study from Umeå University, researchers found that this bacterial toxin can slow the growth of colorectal tumors without causing measurable harm to healthy tissue. When administered throughout the body, the purified compound appeared to act selectively within tumors, altering immune activity in ways that may support long-term cancer control. “The substance not only kills cancer cells directly. It reshapes the tumor environment and helps the immune system to work against the tumor without damaging healthy tissue,” says Sun Nyunt Wai, professor at Umeå University and one of the lead authors behind the study.
Toxin Stops Colon Cancer Growth, Without Harming Healthy Tissue
To test this idea, the researchers combined self-directed internal speech, described as quiet “mumbling,” with a specialized working memory system. This approach allowed their AI models to learn more efficiently, adjust to unfamiliar situations, and handle multiple tasks at once. The results showed clear gains in flexibility and overall performance compared with systems that relied on memory alone.
AI that talks to itself learns faster and smarter
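The article only sketches the architecture, but the idea of pairing self-directed "mumbling" with a working memory can be illustrated with a toy agent. This is a deliberately simplified sketch under my own assumptions (templated strings for inner speech, a bounded deque for memory); the actual study used trained neural models.

```python
from collections import deque

class InnerSpeechAgent:
    """Toy illustration: an agent that verbalizes each observation
    ('mumbles') and stores the notes in a bounded working memory,
    then acts on the accumulated verbal context rather than on raw
    observations alone. Purely illustrative, not the study's model."""

    def __init__(self, memory_size=4):
        self.memory = deque(maxlen=memory_size)  # bounded working memory

    def mumble(self, observation):
        # Self-directed inner speech describing the current situation.
        return f"I see {observation}; the current goal still applies."

    def step(self, observation):
        note = self.mumble(observation)
        self.memory.append(note)  # oldest note drops out automatically
        # Decisions would condition on this joined verbal context.
        return " | ".join(self.memory)

agent = InnerSpeechAgent(memory_size=2)
agent.step("a red door")
context = agent.step("a key")
```

Because the memory holds verbal summaries rather than raw sensory data, the same mechanism transfers across tasks — which is one plausible reading of the flexibility gains the researchers report.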
A research team in Japan discovered that tau proteins linked to Alzheimer’s first assemble into loose clusters before forming harmful fibrils. Inspired by polymer physics, the scientists showed these early clusters are reversible and can be dissolved by changing solution conditions.
Scientists Uncover a Hidden Early Stage of Alzheimer’s That They Can Stop
When people listen to a story, their brains do not process language all at once. Instead, meaning unfolds over time, with different regions contributing at different moments as words accumulate into phrases, sentences, and ideas. Now, a new study suggests that this temporal choreography inside the human brain closely resembles the internal step-by-step structure of modern artificial intelligence language models that power tools like ChatGPT.
The research, published in Nature Communications, reports that the layered architecture of large language models (LLMs) aligns with the timing of neural activity in human language areas during listening to natural speech. In effect, the deeper an AI model layer is, the later its activity matches what the brain is doing—suggesting a surprising convergence between biological language comprehension and machine learning systems trained only on text.
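The core analysis idea — matching each model layer's activity to the time lag at which it best tracks neural data — can be reconstructed in miniature. Everything below is synthetic and hand-rolled (the toy "neural response", the shift-based layer activations, a plain Pearson correlation); it only demonstrates the lag-matching logic, not the study's actual pipeline.

```python
# Toy reconstruction of the lag-matching analysis: for each model "layer",
# find the time lag at which its activity best correlates with a neural
# signal. By construction, deeper layers here peak at later lags.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

base = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0, 0, 0]  # stand-in neural response

def shifted(seq, lag):
    # Delay a time course by `lag` steps (circular shift for simplicity).
    return seq[-lag:] + seq[:-lag] if lag else list(seq)

# Layer k's "activation time course": the base pattern delayed by k steps,
# mimicking deeper layers tracking later stages of comprehension.
layers = {k: shifted(base, k) for k in range(4)}

def best_lag(activity, signal, max_lag=6):
    return max(range(max_lag),
               key=lambda lag: pearson(shifted(signal, lag), activity))

peaks = [best_lag(act, base) for k, act in sorted(layers.items())]
```

With real data the correlation would be computed between layer embeddings and recorded neural activity, but the monotonic depth-to-lag relationship is exactly the convergence the paper reports.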
There is a new pattern hiding in plain sight across Big Tech. The companies that normally fight each other for every enterprise workload and every developer dollar are increasingly behaving like a coalition. Not a formal one. Not a press-conference one. A coalition of capital, compute, and distribution. And its purpose is simple: keep OpenAI strong enough, visible enough, and funded enough to prevent Google from becoming the default AI platform for consumers and, downstream, for the enterprise.
The Anti-Google Alliance: Why the Hyperscalers Are Backing OpenAI Like Their Businesses Depend on It
The phrase sounds like a meme, but it captures a real corporate mood shift: the moment a company tries a competing product internally, realizes it is genuinely better for a critical workflow, and suddenly cannot unsee what that implies about the market. That is why “Claude-pilled” matters here. At Davos, Nvidia CEO Jensen Huang did not casually compliment a rival model. He explicitly said that Nvidia uses Anthropic’s Claude widely across the company, especially for coding and reasoning tasks. When the CEO of the GPU empire says his workforce relies on a competing model for one of the highest value use cases in AI, that is not marketing fluff. It is an operational signal. And once that signal is sent, everything else around Nvidia’s relationship with OpenAI starts to look different.
Did Nvidia Get Claude-Pilled? - Neural Foundry Substack
The resulting aid cuts could cause more than 14 million additional deaths by 2030, according to a warning published by researchers in the Lancet medical journal last year.
Elon Musk will be deposed along with DOGE staffers over USAID dismantling
Lindsey Vonn wiped out in a downhill race on January 30. She got up limping, then was airlifted from the course. The diagnosis: a ruptured ACL — a season-ending injury for most. But the three-time Olympic medalist announced on Tuesday she would go on to compete in her fifth Games. For anyone who’s hobbled off the field, it’s hard not to ask: How? “It is a big deal to tear your ACL,” said Lindsey Lepley, an associate professor of athletic training at the University of Michigan. “And doing anything while being ACL-deficient is a big deal.” Vonn, 41, who is set to be the oldest Alpine skier to race at a Winter Olympics, has an extensive history of knee injuries and surgeries, including two prior ACL injuries.
How Lindsey Vonn can compete with a ruptured ACL
Claude Code does more than just code; it is the best example of an AI agent. You interact with a computer in natural language, describing objectives and outcomes rather than implementation details. Give Claude (the CLI) an input such as a spreadsheet, a codebase, or a link to a webpage, then ask it to achieve an objective. It makes a plan, verifies details, and then executes. It is a glimpse of the future, but it is also already here today in software.
Claude Code is the Inflection Point
Coding was once the most valuable work of all, with programmers in hot demand during the early-2020s software boom. Coding is now the beachhead for the disruption that agentic information processing brings, and the larger $15 trillion information-work economy is at risk. There are more than a billion information workers, roughly a third of the global 3.6 billion workforce per the ILO. Nearly every workflow in the information-work category follows the same pattern that Claude Code proves out for software: READ (ingest unstructured information), THINK (apply domain knowledge), WRITE (produce structured output), and then VERIFY (check against standards). That loop covers large swathes of most information work (including research!), and if agents can eat software, what labor pool can they not touch? Our view is quite a few, and with the rise of Claude Code (and Cowork) the total addressable market of agents is much larger than LLMs alone. Niche beachheads like customer support and software development will expand into the larger financial services, legal, consulting, and other industries. This is the core focus of the SemiAnalysis Tokenomics Model.
Enterprise software is easily the first casualty of the great cost decline of intelligence. SaaS itself is just crystallized information processing: workflows frozen into code. The three moats of SaaS, namely switching costs of data (data is trapped), workflow lock-in (users have learned the UI), and integration complexity (how Slack works with Jira), have all been partially eroded at the margins. The 75% gross margin of SaaS looks like a huge opportunity: agents migrate data between systems at far lower cost, agents do not rely on human-oriented workflows, and MCP integrations make connecting systems much easier. Every aspect of SaaS is cheapening, and its margins have become AI's first opportunity.
All the levers of economic stimulus in America are pushed to the maximum, setting the conditions for torrid overheating and an unstable boom by the end of the year.
The Trump boom is a high-stakes economic experiment
People who took suvorexant, a common treatment for insomnia, for two nights at a sleep clinic experienced a slight drop in amyloid-beta and tau, two proteins that pile up in Alzheimer’s disease. The trial was short and involved a small group of healthy adults, but the research, from Washington University in St. Louis, is an interesting demonstration of the link between sleep and the molecular markers of Alzheimer’s disease.
A Common Sleeping Pill May Reduce Buildup of Alzheimer’s Proteins, Study Reveals
OpenAI is actively exploring alternatives to Nvidia’s latest AI chips in certain use cases, according to sources familiar with the matter, reflecting growing performance and scaling challenges as demand for advanced AI services accelerates. Sources indicate OpenAI has been assessing non-Nvidia hardware options since last year, driven in part by dissatisfaction with the speed at which current-generation Nvidia accelerators can deliver responses to users for more complex queries. While Nvidia’s GPUs remain central to large-scale AI training and inference, the issue appears less about absolute capability and more about efficiency, latency and throughput as models grow larger and workloads become increasingly demanding.
OpenAI explores alternatives to Nvidia AI chips amid inference speed concerns | investingLive
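The latency-versus-throughput distinction the article raises is worth making concrete. A back-of-envelope model: total response time is time-to-first-token (dominated by prompt processing) plus decode time (output length divided by generation throughput). All numbers below are invented for illustration, not measured figures for any real accelerator.

```python
# Illustrative arithmetic: why a chip with lower peak throughput can still
# "feel" faster on complex queries. Numbers are hypothetical.

def response_time(prompt_eval_s, tokens_out, tokens_per_s):
    """Total wall-clock time: time to first token + decode time."""
    return prompt_eval_s + tokens_out / tokens_per_s

# Accelerator A: high throughput, slow prompt processing.
a = response_time(prompt_eval_s=3.0, tokens_out=800, tokens_per_s=100.0)
# Accelerator B: 10% lower throughput, but much faster time-to-first-token.
b = response_time(prompt_eval_s=0.5, tokens_out=800, tokens_per_s=90.0)
```

Under these assumed numbers B finishes the whole response sooner despite lower raw throughput, which mirrors the article's point that the issue is less about absolute capability than about how quickly complex queries come back to users.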
The survey reveals what Informatica calls a “trust paradox” — and explains why data leaders are dangerously overconfident about AI readiness. Organizations deployed generative AI systems faster than they built the governance and training infrastructure to support them. The result: Employees generally trust the data powering AI systems, but organizations acknowledge their workforces lack the literacy to question that data or use AI responsibly. Seventy-five percent of data leaders say employees need upskilling in data literacy. Seventy-four percent require AI literacy training for day-to-day operations.
The trust paradox killing AI at scale: 76% of data leaders can’t govern what employees already use
What if your internet browser acted like a full-time employee, handling research, planning, and execution for you? That’s exactly what OpenAI’s Atlas browser makes possible, and most people still aren’t using it. I’ll show you how solopreneurs are already using it to reclaim 40+ hours a week. In this video, I’m breaking down eight plug-and-play use cases that solo entrepreneurs are using right now to scale toward six to seven figures and cut their workload in half:
ChatGPT’s New Internet Browser Can Run 80% of a 1-Person Business — No Tech Skills Required
Fortune 1000 brands waste billions every year on ineffective campaigns. RAD Intel’s award-winning AI technology helps them turn that chaos into clarity — using data-driven intelligence to create high-performing content that delivers measurable ROI. That’s why a who’s-who roster of global brands and agencies — across entertainment, healthcare, automotive, and lifestyle — rely on RAD Intel’s platform for precision marketing and influencer strategy.
However, a new study by researchers at the Institute of Neuroscience at the University of Oregon shows that you can speed up these processes by adding a third element to practice and feedback: passive exposure. The good news is that passive exposure requires minimal effort and is enjoyable. “Active learning of a... task requires both expending effort to perform the task and having access to feedback about task performance,” the study authors explained. “Passive exposure to sensory stimuli, on the other hand, is relatively effortless and does not require feedback about performance.”
Neuroscientists say a simple trick will help you learn any new skill a lot faster - Upworthy

