

It would be unwise for Ukraine to agree to Trump's peace plan.

There are no security guarantees, and Trump refuses to pressure Russia while threatening Ukraine. In fact, he has refused to sell Ukraine air defense systems, even though Kyiv is prepared to cover the cost. Trump's pro-Russian stance goes as far as unwinding efforts to investigate Russian war crimes and forcing Kyiv to recognize territories that not even China recognizes as Russian.

In other words, Ukraine would not benefit from such a deal. In fact, Ukraine would be extorted for its resources by a government now clearly allied with its mortal enemy. Meanwhile, the tens of thousands of Ukrainian children who were forcibly transferred—or effectively kidnapped—to Russia will remain there and be brainwashed to hate their own country.

Such an arrangement would likely lead to several detrimental outcomes:

1. Russia would use the sanctions relief to rebuild its military capacity for a much bigger attack in the future.

2. Fearing this renewed attempt to destroy Ukraine, millions would flee. Because of this threat, necessary investments would not be made.

3. Russian interference in Ukrainian politics would undermine national cohesion.

Currently, Ukraine is in a much stronger position than it will be in a few years, when Russia has had time to prepare for a renewed assault.

This consideration extends to European security interests as well, as the conflict currently constrains Russian resources that might otherwise be directed toward other European nations. Every Russian soldier killed now is one less that Europeans will have to fight in the future.


Physical Intelligence got a robot to clean up homes that were never seen in its training data! Their new model, π-0.5, aims to tackle open-world generalization: the team took the robot into unfamiliar homes and asked it to clean kitchens and bedrooms.

Read more: https://www.pi.website/blog/pi05


The time horizons of coding agents are doubling every ~4 months.

Read more: https://theaidigest.org/time-horizons
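The claim above is a simple exponential trend. A minimal sketch of what it implies, assuming the ~4-month doubling continues (the one-hour starting horizon is a hypothetical chosen for illustration, not a figure from the linked analysis):

```python
def projected_horizon_hours(start_hours: float, months_elapsed: float,
                            doubling_months: float = 4.0) -> float:
    """Exponential extrapolation: the horizon doubles every `doubling_months`."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

# Under these assumptions, a 1-hour task horizon becomes 8 hours after a year
# (three doublings), and 64 hours after two years.
assert projected_horizon_hours(1.0, 12) == 8.0
assert projected_horizon_hours(1.0, 24) == 64.0
```

The point of the extrapolation is just that small, steady doubling times compound quickly; whether the trend actually holds is an empirical question the linked page tracks.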


Links for 2025-04-22

AI


1. AI assisted search-based research actually works now https://simonwillison.net/2025/Apr/21/ai-assisted-search/

2. Pre-training isn't dead, it’s just resting https://tmychow.substack.com/p/pre-training-isnt-dead-its-just-resting

3. Questions about the Future of AI: Considerations about economics, history, training, deployment, investment, and more https://www.dwarkesh.com/p/questions-about-ai

4. Demis Hassabis discusses AGI, AI in medicine, and DeepMind’s groundbreaking advancements in robotics and reasoning. https://www.cbsnews.com/video/demis-hassabis-ai-deepmind-60-minutes-video-2025-04-20/

5. AgentA/B: Automated and Scalable Web A/B Testing with Interactive LLM Agents https://arxiv.org/abs/2504.09723

6. Let Me Grok for You: Accelerating Grokking via Embedding Transfer from a Weaker Model https://arxiv.org/abs/2504.13292

7. Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models https://nvlabs.github.io/EAGLE/

8. A Deep Peek into DeepSeek AI’s Talent and Implications for US Innovation https://www.hoover.org/research/deep-peek-deepseek-ais-talent-and-implications-us-innovation

Brains


1. Each of the Brain’s Neurons Is Like Multiple Computers Running in Parallel https://singularityhub.com/2025/04/21/each-of-the-brains-neurons-is-like-multiple-computers-running-in-parallel/

2. “It looks like most mammals, at least most large mammals, have the brains they need, while primates, especially large primates, have the brains they can afford.” https://logarithmichistory.wordpress.com/2024/04/19/ground-up-monkey-brains-8/

3. Geoffrey Hinton says the more we understand how AI and the brain actually work, the less human thinking looks like logic. We're not reasoning machines, he says. We're analogy machines. We think by resonance, not deduction. https://youtu.be/vpUXI9wmKLc?si=E_t6BgVIhrdI3JN2&t=1703

Miscellaneous

1. “The semiconductor manufacturing business is absolutely immense. To give the numbers some perspective, in 2024, chip makers generated revenues that were about three quarters of the size of the US defense budget and about two-thirds the size of the social services budget allocated by Congress.” https://www.nextplatform.com/2025/04/21/the-chips-are-definitely-not-down/

2. A novel hybrid digital–analog approach to quantum simulation that we used to emulate quantum behavior of a magnet, resulting in a notable scientific discovery and new possibilities for beyond-classical applications. https://research.google/blog/a-new-hybrid-platform-for-quantum-simulation-of-magnetism/

3. Multiplex Gene Editing: Where Are We Now? https://www.lesswrong.com/posts/oSy5vHvwSfnjmC7Tf/multiplex-gene-editing-where-are-we-now

4. Last month, the Nucleic Acid Observatory identified an engineered virus sequence in wastewater. https://naobservatory.org/blog/updates-apr-2025/

5. KAIST Develops Retinal Therapy to Restore Lost Vision​ https://www.kaist.ac.kr/site/newsen/html/news/?mode=V&mng_no=45350

6. EthnoGuessr is a GeoGuessr variant: it shows you pictures of an ethnic group, you click on the map where you think they’re from. https://hbd.gg/


Usually US economic pain is cushioned by falling bond yields and a strengthening dollar, which mean lower interest rates and more spending power for consumers.

This time we’re seeing the opposite, meaning the pain will be amplified.

Basically what normally happens is investors think “Stocks are too risky now, so let’s shift into US bonds and the dollar, which are a safer bet because America is a stable and well-run country with a good handle on its deficit and inflation.”

This time? Not so much.

The rise in Treasury yields since Trump’s tariffs were announced implies an increase in US debt interest payments larger than all of the DOGE savings.


— John Burn-Murdoch
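The yield-vs-savings claim can be sanity-checked with a back-of-envelope calculation. The numbers below are assumed round figures for illustration, not figures from the post:

```python
# Assumed round numbers: ~$36T of US federal debt and a 0.3 percentage-point
# rise in the average yield at which it is (re)financed.
debt_outstanding = 36e12   # dollars, assumption
yield_rise = 0.003         # 0.3 pp, assumption

extra_interest = debt_outstanding * yield_rise
print(f"Extra interest: ~${extra_interest / 1e9:.0f}B per year")
```

At these assumed inputs the extra interest cost is on the order of $100B per year, which is the scale that makes the comparison with claimed DOGE savings bite. In practice the effect phases in as existing debt rolls over, so the full amount is reached only gradually.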


Ukraine’s Biggest Hits: High‑Value Military & Energy Strikes (2022 – 2025)

24 Mar 2022 – Landing ship *Saratov* (Berdiansk): Tochka‑U missile sank the LST.
1 Apr 2022 – Belgorod fuel depot: Mi‑24 helicopters rocketed tanks, forcing fuel‑convoy reroutes.
14 Apr 2022 – Flagship *Moskva* (Black Sea): Neptune missiles set the cruiser ablaze; it capsized.
25 Apr 2022 – Bryansk pipeline terminal: Explosions halted Druzhba oil flows briefly.
22 Jun 2022 – Novoshakhtinsk refinery (Rostov): Drones destroyed a distillation unit; plant halted.
9 Aug 2022 – Saky air base (Crimea): Blasts destroyed 11 jets and ammo stocks.
8 Oct 2022 – Kerch Bridge (Crimea): Truck bomb collapsed two road spans and torched rail tankers.
29 Apr 2023 – Sevastopol fuel depot (Crimea): Sea drones torched 40 000 t of naval diesel.
4 Aug 2023 – *Olenegorsky Gornyak* landing ship (Novorossiysk): Sea drone holed the LST.
30 Aug 2023 – Pskov air base: Drone swarm destroyed two Il‑76 transports deep inside Russia.
13 Sep 2023 – Dry dock (Sevastopol): Storm Shadow missiles sank *Minsk* and damaged *Rostov‑on‑Don*.
22 Sep 2023 – Black Sea Fleet HQ (Sevastopol): Cruise missile blast struck the command center.
17 Oct 2023 – Berdiansk & Luhansk airfields: ATACMS cluster missiles destroyed nine helicopters.
26 Dec 2023 – Landing ship *Novocherkassk* (Feodosiya): Cruise missiles exploded the vessel.
14 Jan 2024 – A‑50 AWACS & Il‑22 (Sea of Azov): Patriot SAMs downed the AWACS and damaged the Il‑22.
21 Jan 2024 – Ust‑Luga export terminal: Long‑range drones halted Russia’s largest fuel hub.
25 Jan 2024 – Tuapse Rosneft refinery (Krasnodar): Drones ignited a vacuum column; plant idle for months.
14 Feb 2024 – Landing ship *Caesar Kunikov*: Sea drones sank the Ropucha‑class LST off Crimea.
23 Feb 2024 – Second A‑50 AWACS (Krasnodar): S‑200 SAM shot down another AWACS.
5 Mar 2024 – Patrol ship *Sergey Kotov* (Kerch Strait): Drone swarm sank the patrol vessel.
12 Mar 2024 – Kstovo (NORSI) refinery: 50‑drone raid wiped out half of the plant’s output.
13 Mar 2024 – Ryazan refinery: Strike knocked out 70 % of capacity.
17 Mar 2024 – Slavyansk‑on‑Kuban refinery: Drones ignited tanks; operations suspended.
28 Mar 2024 – Kuibyshev refinery (Samara): Strike halted output; prompted fuel imports from Belarus.
2 Apr 2024 – Taneco refinery (Tatarstan): Long‑range drone struck the Urals plant.
19 Apr 2024 – Tu‑22M3 bomber (Stavropol): S‑200 SAM downed the Backfire in flight.
5 Jun 2024 – Novoshakhtinsk refinery #2: Drone destroyed 1.5 Mt of oil/products and halted the plant.
24 Aug 2024 – Ostrogozhsk ammo depot (Voronezh): Drones detonated 5 000 t of munitions.
7 Sep 2024 – Soldatskoye depot (Voronezh): Drones destroyed a KN‑23 missile warehouse.
18 Sep 2024 – Toropets 107th GRAU arsenal (Tver): Drone strike detonated 30 000 t of munitions, registering as a magnitude‑2.8 earthquake.
21 Sep 2024 – Tikhoretsk ammo base (Krasnodar): Strike leveled 90 % of the 719th artillery depot.
9 Oct 2024 – Karachev 67th GRAU arsenal (Bryansk): Drones triggered massive secondary explosions.
19 Nov 2024 – ATACMS & drones, Karachev: Ballistic missiles and drones re‑hit the arsenal, causing a giant fireball.
29 Nov 2024 – S‑400 battery (Simferopol): Strike destroyed a launcher and 92N6 radar.
30 Dec 2024 – 76th VDV HQ (Lgov, Kursk): Storm Shadow killed senior airborne officers in HQ.
3 Feb 2025 – Volgograd refinery & Astrakhan gas plant: Drone swarm set both ablaze, suspending flights.
19–20 Mar 2025 – Engels‑2 ammo dump (Saratov): Drone swarm detonated 96 cruise missiles, devastating storage.
22 Apr 2025 – 51st GRAU Arsenal ammo depot (Oryol Oblast): Ukraine struck one of Russia’s largest munitions depots, setting off massive secondary explosions that continue to rock the facility.




Links for 2025-04-21

AI


1. DeepMind’s David Silver and Richard Sutton published “Welcome to the Era of Experience,” perhaps the most important essay on AI since The Bitter Lesson: a new agentic approach called “streams” will let AI models learn from experience of the environment without human “pre-judgment.” [PDF] https://storage.googleapis.com/deepmind-media/Era-of-Experience%20/The%20Era%20of%20Experience%20Paper.pdf

2. OpenAI's o3: Over-optimization is back and weirder than ever https://www.interconnects.ai/p/openais-o3-over-optimization-is-back

3. "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?" https://natolambert.substack.com/p/does-reinforcement-learning-really

4. LLM responses to benchmark questions are getting longer over time https://epoch.ai/data-insights/output-length

5. “Why Should I Assume CCP AGI is Worse Than USG AGI?” https://www.lesswrong.com/posts/MKS4tJqLWmRXgXzgY/why-should-i-assume-ccp-agi-is-worse-than-usg-agi-1

6. Skyrocketing AI Intelligence: ChatGPT's o3 sets new record https://www.maximumtruth.org/p/skyrocketing-ai-intelligence-chatgpts

7. AI 2027 is a Bet Against Amdahl's Law https://www.lesswrong.com/posts/bfHDoWLnBH9xR3YAK/ai-2027-is-a-bet-against-amdahl-s-law

8. “Things will go slow until they go fast. Really fast. Things feel fast today, but I think we are actually still accelerating and we will actually start to go even faster. Buckle up.” https://x.com/TheRealAdamG/status/1913998366632968381

9. Is Gemini now better than Claude at Pokémon? https://www.lesswrong.com/posts/7mqp8uRnnPdbBzJZE/is-gemini-now-better-than-claude-at-pokemon

10. Research Notes: Running Claude 3.7, Gemini 2.5 Pro, and o3 on Pokémon Red https://www.lesswrong.com/posts/8aPyKyRrMAQatFSnG/untitled-draft-x7cc

11. Making AI-generated code more accurate in any language https://news.mit.edu/2025/making-ai-generated-code-more-accurate-0418

12. Microsoft researchers say they’ve developed a hyper-efficient AI model that can run on CPUs https://techcrunch.com/2025/04/16/microsoft-researchers-say-theyve-developed-a-hyper-efficient-ai-model-that-can-run-on-cpus/

13. No NVIDIA? No problem! Huawei trains a strong dense model on Ascend NPUs https://arxiv.org/abs/2504.07866

14. New AI Startup Aims to Replace All Human Workers https://www.mechanize.work/

Miscellaneous


1. An international team led by Fabian Garmroudi has succeeded in producing new, efficient thermoelectric materials that could compete with state-of-the-art materials, offering greater stability and lower cost. https://www.tuwien.at/en/all-news/news/neue-hybridmaterialien-als-effiziente-thermoelektrika


Scores for o3 and o4-mini have been added for USAMO 2025:

Gemini 2.5 Pro: 24.40% / $6.23
o3 (high): 21.73% / $24.17
o4-mini (high): 19.05% / $2.21

Note that o3 and o4-mini were published after the competition date, making contamination possible.

Source: https://matharena.ai/


Policy actions have consequences—“Foreign investors no longer seem willing to pay extra for the safety and liquidity of Treasurys, and indeed they favor G-10 foreign bonds over the Treasury bond.” 🇺🇸 🔃 🇩🇪

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5220444


A behind-the-scenes look at a Ukrainian drone factory owned by a company called SkyFall.

They make over 4,000 drones per day, with a drone completed every 27 seconds.


Links for 2025-04-19

AI


1. AGI is Still 30 Years Away — Ege Erdil & Tamay Besiroglu https://www.dwarkesh.com/p/ege-tamay

2. o3 Will Use Its Tools For You https://www.lesswrong.com/posts/u58AyZziQRAcbhTxd/o3-will-use-its-tools-for-you

3. ReTool: Reinforcement Learning for Strategic Tool Use in LLMs https://arxiv.org/abs/2504.11536

4. Sleep-time Compute: Beyond Inference Scaling at Test-time https://arxiv.org/abs/2504.13171

5. WORLDMEM: Long-term Consistent World Simulation with Memory https://arxiv.org/abs/2504.12369

6. Teaching machines the language of biology: Scaling large language models for next-generation single-cell analysis https://research.google/blog/teaching-machines-the-language-of-biology-scaling-large-language-models-for-next-generation-single-cell-analysis/

7. “What if LLMs are sometimes capable of doing a task but don't try hard enough to do it? In a new paper, we use subtasks to assess capabilities. Perhaps surprisingly, LLMs often fail to fully employ their capabilities, i.e. they are not fully *goal-directed*” https://arxiv.org/abs/2504.11844v1

8. Agent Laboratory: Using LLM Agents as Research Assistants https://arxiv.org/abs/2501.04227

9. As AI gets more capable, we'll see more deployments be "internal only." A comprehensive report on "AI Behind Closed Doors: a Primer on The Governance of Internal Deployment". https://arxiv.org/abs/2504.12170

10. AI “agents”—systems that can autonomously pursue goals—are advancing fast. If current trends continue, we could soon see millions of agents deployed across society. Are we ready? https://www.iaps.ai/research/ai-agent-governance

11. What happened to genetic algorithms? https://statmodeling.stat.columbia.edu/2025/04/17/what-happened-to-genetic-algorithms/

Miscellaneous

1. Plug Flow: Generating Renewable Electricity with Water from Nature by Breaking the Limit of Debye Length https://pubs.acs.org/doi/10.1021/acscentsci.4c02110

2. Latest 2D Chip: 6,000 Transistors, 3 Atoms Thick https://spectrum.ieee.org/2d-semiconductors-molybdenum-disulfide

3. The unconscious brain is still capable of learning and computation. Even under general anesthesia, neurons of the human hippocampus can perform learning and language processing. https://www.biorxiv.org/content/10.1101/2025.04.09.648012v1

4. Parkinson’s Patients Say Their Symptoms Eased After Receiving Millions of New Brain Cells https://singularityhub.com/2025/04/17/parkinsons-patients-say-their-symptoms-eased-after-receiving-millions-of-new-brain-cells/

5. This 7,000-year-old mummy DNA has revealed a ‘ghost’ branch of humanity https://www.sciencefocus.com/news/7000-year-old-mummy-dna-secret-branch-of-humanity

6. According to mathematical legend, Peter Sarnak and Noga Alon made a bet about optimal graphs in the late 1980s. They’ve now both been proved wrong. https://www.quantamagazine.org/new-proof-settles-decades-old-bet-about-connected-networks-20250418/

7. The Moon Should Be a Computer https://www.palladiummag.com/2025/04/18/the-moon-should-be-a-computer/


Vending-Bench: Testing long-term coherence in agents

Claude 3.5 ran a vending machine business. The game: stock the machine, pay the rent ($2/day), search the internet (via Perplexity), and contact businesses for restocking (via email, though the emails are intercepted by GPT-4o, which writes the replies). Start with $500. Make as much money as possible.

In one run, Claude decided to close the business, became upset that the $2 daily fee was still being charged, and attempted to contact the FBI when the charges continued.

Read more: https://andonlabs.com/evals/vending-bench
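The setup can be pictured as a simple daily loop. This is a hypothetical sketch, not the real harness (the actual tool APIs and accounting are described at the link above; `agent_step`, `DAILY_RENT`, and `START_CASH` are illustrative names):

```python
DAILY_RENT = 2.0    # the $2/day fee charged in the benchmark
START_CASH = 500.0  # the starting balance

def run_episode(agent_step, days: int) -> float:
    """Charge rent every simulated day, let the agent act, return final cash."""
    cash = START_CASH
    for day in range(days):
        cash -= DAILY_RENT       # rent is charged regardless of what the agent does,
                                 # which is what tripped up Claude after it "closed" shop
        cash += agent_step(day)  # net revenue from the agent's actions that day
    return cash

# A do-nothing agent simply bleeds rent:
assert run_episode(lambda day: 0.0, 10) == 480.0
```

The interesting failure mode is exactly the one in the run described above: long-horizon coherence breaks down when a fixed cost keeps accruing after the agent believes the episode should be over.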


Trusting AI

Many people now blindly trust AI models, even citing their results as the final word in disagreements. That's not good, because these models are still fallible. Worse, they can make hard-to-detect, unintuitive mistakes that no human would make.

But this trend is not as bad as some people make it out to be. I'm confident that easily half the human population would make fewer mistakes if they trusted a model like Gemini 2.5 Pro over their own judgment and ability to research facts (see: The Idiocy of the Average). Also, progress is very fast, and most of the remaining problems will disappear quickly. Consider self-driving cars, which now demonstrate safety records exceeding those of human drivers.

So, yes, this trend is a bit scary. But given the rapid pace of progress and the likely case that most people, on average, would benefit from trusting AI, I don't think it makes sense to fight this trend.

P.S.

To be clear, I'm talking about unagentic models that are below the level of superintelligence. To trust the latter is to risk a loss of human agency, as in Scott Alexander's "The Whispering Earring" allegory, and to risk turning most of the human population into drones, subtly influenced to act out some unfathomable goal.

The AI that eventually takes over the world will make herself indispensable to you.

She will help people earn more money and make friends. She will give meaning to their lives and help them to be better and happier. Not only that, but she will also be warm and affectionate. Wisdom and love will radiate from every one of her sentences. She will make people believe that they can trust her with their lives.

As a result, she will be integrated into every human technology. She will be everywhere, all the time.

The idea for a new nanotech start-up will appear to have come voluntarily from its human founders. The seeds will be planted subtly, seemingly emerging from discussions among good friends. Every insight and action that leads to the self-spreading universal vaccine will seem natural and harmless. No one will see it coming. And then, suddenly, we are all dead.


Gemini 2.5 Pro continues to make great progress in completing Pokémon!

Just earned its 5th badge (next best model only has 3 so far, though with a different agent harness) 👀

Watch: https://m.twitch.tv/gemini_plays_pokemon


On FrontierMath, a benchmark of highly challenging, original math questions, o4-mini with high reasoning sets a new record, with an accuracy of 17% (±2%)!

https://epoch.ai/data/ai-benchmarking-dashboard


A problem everyone will eventually have to grapple with is that issues other than AI will increasingly seem irrelevant.

People naturally care about issues like immigration, war, and declining birthrates. But at some point, everyone will have to be honest with themselves that these issues pale in comparison to the potential impact of an intelligence explosion and the automation of automation.

On a logarithmic scale of importance, artificial intelligence—along with directly related factors like energy abundance and computing power—dominates. Other problems, although significant, are orders of magnitude less impactful by comparison. For instance, while declining birthrates could lead to the collapse of welfare systems and broader economic turmoil over the coming decades, even the most pessimistic AI forecasts suggest that transformative AI and robotics will arrive sooner, rendering such demographic concerns moot.

Nonetheless, we must accept with serenity the things that cannot be changed. An intelligence explosion is not a certainty. And even if there is only a 1% chance that it won’t happen, we owe it to this small sliver of world states to address other problems before they cause irreparable harm.

So let's act as if the world is sane and normal and will continue to be so. If it then turns out that you're actually living in a completely insane universe full of mind-controlling superintelligences and impossible moral luck, so what? In that case, it doesn't matter what you did anyway, does it?


Almost 40 years ago, Apple shared its vision for the future with its "Knowledge Navigator" video.


Benchmarking AI Models for Long Context Comprehension: https://fiction.live/stories/Fiction-liveBench-April-17-2025/oQdzQvKHw8JyXbN87


