Axis of Ordinary


Memetic and cognitive hazards.
Substack: https://axisofordinary.substack.com/


Links for 2024-10-18


AI:

1. Transformers can be trained to solve a 132-year-old open problem: discovering global Lyapunov functions. https://arxiv.org/abs/2410.08304

2. DeepSeek introduces Janus, a new autoregressive framework for multimodal AI. By decoupling visual encoding into separate pathways and unifying them with a single transformer, it outperforms previous models in both understanding and generation. https://arxiv.org/abs/2410.13848

3. nGPT: A hypersphere-based Transformer achieving 4-20x faster training and improved stability for LLMs. https://arxiv.org/abs/2410.01131

4. Google presents Inference Scaling for Long-Context Retrieval Augmented Generation, finding that increasing inference computation leads to nearly linear gains in RAG performance when optimally allocated. https://arxiv.org/abs/2410.04343

5. A method to learn from internet-scale videos without action labels https://latentactionpretraining.github.io/

6. Combining next-token prediction and video diffusion in computer vision and robotics https://news.mit.edu/2024/combining-next-token-prediction-video-diffusion-computer-vision-robotics-1016

7. TopoLM: brain-like spatio-functional organization in a topographic language model https://arxiv.org/abs/2410.11516

8. The most compelling single-neuron evidence for the existence of a generative model in the human brain https://www.biorxiv.org/content/10.1101/2024.10.05.616828v2.abstract

9. Looking Inward: Language Models Can Learn About Themselves by Introspection https://arxiv.org/abs/2410.13787

10. Google DeepMind trained a grandmaster-level transformer chess player that achieves 2895 Elo, even on chess puzzles it has never seen before, with zero planning, by only predicting the next best move. https://arxiv.org/abs/2402.04494

11. Evidence of Learned Look-Ahead in a Chess-Playing Neural Network? https://www.lesswrong.com/posts/hPGw7hWYbYyvDcqYK/evidence-against-learned-search-in-a-chess-playing-neural

12. The Premise of a New S-Curve in AI https://tomtunguz.com/elo-improvement/

13. Sam Altman says the most important piece of knowledge discovered in his lifetime is that scaling AI models leads to unbelievable, predictable improvements in intelligence; when he tried to explain this to others and they didn't understand, he wondered whether he was crazy or in a cult. https://youtu.be/FVRHTWWEIz4?si=dvOzjcfwL1ZR-lVM&t=1226

14. Researchers from Google and UoW propose a new collaborative search algorithm to adapt LLMs via swarm intelligence. https://arxiv.org/abs/2410.11163

15. Agent S is a new open agentic framework that enables autonomous interaction with computers through a GUI. https://arxiv.org/abs/2410.08164v1

16. Boston Dynamics and Toyota announced a partnership to accelerate the development of general-purpose humanoid robots https://pressroom.toyota.com/boston-dynamics-and-toyota-research-institute-announce-partnership-to-advance-robotics-research/

17. No Slowdown At TSMC, Thanks To The AI Gold Rush https://www.nextplatform.com/2024/10/17/no-slowdown-at-tsmc-thanks-to-the-ai-gold-rush/

18. U.S. Special Operations Command is seeking the ability to create AI-generated social media users https://theintercept.com/2024/10/17/pentagon-ai-deepfake-internet-users/

19. AI Mathematical Olympiad - Progress Prize 2 https://www.kaggle.com/competitions/ai-mathematical-olympiad-progress-prize-2

Science:

1. “The galaxies were never supposed to be so bright. They were never supposed to be so big. And yet there they are—oddly large, luminous objects that keep appearing in images taken by the James Webb Space Telescope” https://www.quantamagazine.org/the-beautiful-confusion-of-the-first-billion-years-comes-into-view-20241009/

2. Owning a video game console may improve mental health, according to a natural experiment: due to a shortage, Japanese retailers allocated consoles to customers by lottery https://www.nature.com/articles/s41562-024-01948-y

3. Butterfly species' big brains adapted, giving them a survival edge, study finds https://www.sciencedaily.com/releases/2023/07/230712203752.htm


Demis Hassabis says it is wrong to think of AI as being just another technology; he says it will be "epochal defining" and will soon cure all diseases, solve climate and energy problems and enrich our lives

Original source: https://x.com/GoogleDeepMind/status/1846974292963066199


The US decided to send a subtle message by using B-2 bombers to strike five Houthi underground facilities:

https://www.defense.gov/News/Releases/Release/Article/3937640/statement-by-secretary-of-defense-lloyd-j-austin-iii-on-us-airstrikes-in-houthi/


The next tech giant goes nuclear because of AI. Amazon will invest more than $500 million to develop small modular reactors.

https://www.cnbc.com/2024/10/16/amazon-goes-nuclear-investing-more-than-500-million-to-develop-small-module-reactors.html


1. The path the Europa Clipper will take on its journey from Earth to Jupiter.

2. The path the Europa Clipper will take once it reaches Jupiter.

Source: https://mathstodon.xyz/@johncarlosbaez/113312318069599634


I believe that if we take this approach we will find systems and build systems in the coming years which are vastly beyond this band of performance...there will be systems which go vastly beyond current systems and vastly beyond what humans can do...


~ David Silver, principal research scientist at DeepMind and a professor at University College London

Towards Superhuman Intelligence: https://www.youtube.com/watch?v=pkpJMNjvgXw


Someone with a large following on X wrote the following:

I’m being 100% serious:

Your only goal the next 3 years is to not die

We are about to enter the greatest age in human history

AGI. Going to mars. Reverse aging. Humanoid robots. Self driving cars.

...

There is a decent chance in the next 3 years we figure out how to reverse aging and nobody dies anymore


I do believe AI is going to transform the world. And this transformation will be much more dramatic and much faster than many people think. But such statements need to be taken with a grain of salt.

In 2011, Google DeepMind co-founder Shane Legg gave me the following probabilities for the development of artificial general intelligence (AGI):

10% / 50% / 90% = 2018, 2028, 2050.

I still consider the estimates for 2028 and 2050 to be realistic. This means that I continue to believe there's at least a 50% chance that AGI won't be achieved in the next 3–4 years.

However, even if we do reach AGI by then, it doesn’t mean we will suddenly be living in a utopia. Initially, such a model would be very inefficient and would only run on a supercomputer. It could take another 2–3 years of algorithmic progress and advances in computing power before it would be cost-effective to have this AI perform tasks instead of humans.

Moreover, several other hurdles could slow down the process. On the one hand, governments might intervene directly and halt development out of fear, but this is unlikely. By then, we would likely already be in an existential arms race. On the other hand, many existing laws prohibit the automation of certain processes and professions. It will also take time for the AI to advance from human-level intelligence to superhuman levels, although this latter transition could happen relatively quickly.

The most important caveat, however, is that many breakthroughs require slow, experimental work that cannot simply be simulated. It is highly unlikely that a superintelligence will simply "dream up" a cure for aging without conducting some empirical experiments.

All in all, the utopia described in this post may still be a long way off—probably at least 20 years.

Note: A dystopian outcome, such as extinction, could occur much more quickly.


The AI arms race is in full swing:

- Companies are constructing data centers with hundreds of thousands of GPUs. The data center market is expected to experience a one trillion dollar investment boom over the next four to five years in the U.S. alone.

- Google and Microsoft are purchasing nuclear reactors for their data centers.

- The United States is tightening export restrictions on AI chips.

https://crusoe.ai/newsroom/crusoe-blue-owl-capital-primary-digital-joint-venture/index.html


Links for 2024-10-15

AI:

1. US Weighs Capping Exports of AI Chips From Nvidia and AMD to Some Countries https://www.bloomberg.com/news/articles/2024-10-15/us-weighs-capping-exports-of-ai-chips-from-nvidia-and-amd-to-some-countries [no paywall: https://archive.is/TWzIq]

2. Meta presents Thinking LLMs: General Instruction Following with Thought Generation. Superior performance on AlpacaEval and Arena-Hard, with gains from thinking even on non-reasoning categories. https://arxiv.org/abs/2410.10630

3. LeanAgent: Lifelong Learning for Formal Theorem Proving — “It performs up to 11× better than the static LLM baseline, proving challenging theorems in domains like abstract algebra and algebraic topology while showcasing a clear progression of learning from basic concepts to advanced topics.” https://arxiv.org/abs/2410.06209

4. Unleashing System 2 Thinking? AlphaCodium Outperforms Direct Prompting of OpenAI o1 https://www.qodo.ai/blog/system-2-thinking-alphacodium-outperforms-direct-prompting-of-openai-o1/

5. AI Agents Could Collaborate on Far Grander Scales Than Humans, Study Says https://singularityhub.com/2024/10/11/ai-agents-could-collaborate-on-far-grander-scales-than-humans-study-says/

6. Circuits in Superposition: Compressing many small neural networks into one https://www.lesswrong.com/posts/roE7SHjFWEoMcGZKd/circuits-in-superposition-compressing-many-small-neural

7. An industry that communicates with unstructured documents turns to generative AI to clean up its processes. And it's delivering real returns. https://www.bigtechnology.com/p/wheres-the-generative-ai-roi-start

8. VCs Can't Get Enough of AI for Lawyers https://www.newcomer.co/p/vcs-cant-get-enough-of-ai-for-lawyers

9. Tamay Besiroglu on AI in 2030: Scaling, Automation, and AI Agents https://www.youtube.com/watch?v=HyCFtWx5rX0

10. Features are fate: a theory of transfer learning in high-dimensional regression https://arxiv.org/abs/2410.08194v1

11. INTELLECT–1: Launching the First Decentralized Training of a 10B Parameter Model https://www.primeintellect.ai/blog/intellect-1

12. AMD is launching an AI chip to rival Nvidia’s Blackwell GPUs. https://www.cnbc.com/2024/10/10/amd-launches-mi325x-ai-chip-to-rival-nvidias-blackwell-.html

Miscellaneous:

1. Supercritical geothermal looking more and more like it's going to work! https://www.nature.com/articles/s41467-024-52092-0

2. One-shot entorhinal maps enable flexible navigation in novel environments https://www.nature.com/articles/s41586-024-08034-3

3. MIT team takes a major step toward fully 3D-printed active electronics https://news.mit.edu/2024/mit-team-takes-major-step-toward-fully-3d-printed-active-electronics-1015

4. ‘Phenomenal’ tool sequences DNA and tracks proteins — without cracking cells open https://www.nature.com/articles/d41586-024-03276-7

5. Dark Food: Feeding People In Space Without Photosynthesis https://essopenarchive.org/users/536288/articles/599019-dark-food-feeding-people-in-space-without-photosynthesis

6. Big Advance on Simple-Sounding Math Problem Was a Century in the Making https://www.quantamagazine.org/big-advance-on-simple-sounding-math-problem-was-a-century-in-the-making-20241014/

Politics:

1. On China's and Europe's failures to innovate. https://www.cremieux.xyz/p/chinas-upside-down-meritocracy

2. A threat of quantum computing to Bitcoin? “To any of you who are worried about post-quantum cryptography—by now I’m so used to delivering a message of, maybe, eventually, someone will need to start thinking about migrating from RSA and Diffie-Hellman and elliptic curve crypto to lattice-based crypto, or other systems that could plausibly withstand quantum attack. I think today that message needs to change. I think today the message needs to be: yes, unequivocally, worry about this now. Have a plan.” https://scottaaronson.blog/?p=8329

3. Did a top NIH official manipulate Alzheimer's and Parkinson’s studies for decades? https://www.science.org/content/article/research-misconduct-finding-neuroscientist-eliezer-masliah-papers-under-suspicion


Nuclear energy is making a comeback due to the belief that AI will transform the world. Following Microsoft, Google is also turning to nuclear power for its data centers.

They are truly putting their money where their mouth is.

https://blog.google/outreach-initiatives/sustainability/google-kairos-power-nuclear-energy-agreement/





Can humans truly reason? We explore this key question by changing the context of abstract philosophical thought experiments to partisan hot-button political issues. We observe a LARGE drop in performance, but also an increase in variance, making humans increasingly unreliable.

This begs the question: Do humans truly understand rational decision-making? Introducing #Party_Switch! We reversed the position of a political party on two major and salient issues. Check what happens next! Human political opinions changed immediately and substantially when their party switched its policy position – even when the new position went against humans' previously held views. While it'll be interesting to see how AI models perform in similar tests, we doubt the drop-off would be as severe.

Can scaling human IQ and better education solve this? We find that smarter humans tend to perform significantly better on abstract thought experiments, but only become more sophisticated at self-deception and hallucinating seemingly plausible but flawed arguments when dealing with real-world political issues.

Overall, we found no evidence for independent reasoning in humans. Human behavior is better explained by sophisticated groupthink - so fragile, in fact, that changing the names of political parties can alter policy evaluations by ~10%!
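
The measurement itself is easy to state. A minimal sketch in Python (made-up scores, purely illustrative) of how the claimed drop in mean performance and increase in variance between the abstract and politicized framings would be computed:

from statistics import mean, variance

# Hypothetical per-item scores: 1 = reasoned through the dilemma, 0 = collapsed
# into a partisan talking point. Numbers are illustrative, not real data.
abstract_scores = [1, 1, 1, 0, 1, 1, 1, 0, 1, 1]      # abstract thought experiments
politicized_scores = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # same dilemmas, hot-button framing

print("Drop in mean performance:", mean(abstract_scores) - mean(politicized_scores))
print("Increase in variance:", variance(politicized_scores) - variance(abstract_scores))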


SpaceX lands Starship's rocket booster in the arms of massive metal pincers, marking a huge success in its fifth test flight.


If you look at the results of the new Apple paper, which supposedly shows that AI models cannot reason, you find three things:

1. Larger models perform notably better than smaller ones.
2. Scaling inference compute improves model performance even further.
3. They never assessed how much human performance was impacted by the same tricks they used to challenge AI models.

One way they tried to trick these models was to change the names and numbers that appear in problems. This reduces their performance. But is that surprising?
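
To make the perturbation concrete: the logical structure of each problem stays fixed while surface details are resampled. A minimal illustrative sketch in Python (hypothetical template and names, not the paper's actual benchmark code):

import random

# A GSM-style word problem as a template: the reasoning required is identical
# across variants; only the surface details (name, quantities) change.
TEMPLATE = "{name} has {a} apples and buys {b} more. How many apples does {name} have now?"

def sample_variant(rng):
    # Return a perturbed problem together with its ground-truth answer.
    name = rng.choice(["Sofia", "Liam", "Aisha", "Noah"])
    a, b = rng.randint(2, 50), rng.randint(2, 50)
    return TEMPLATE.format(name=name, a=a, b=b), a + b

rng = random.Random(0)
for _ in range(3):
    question, answer = sample_variant(rng)
    print(question, "->", answer)

# In an evaluation of this kind, each variant is sent to the model and accuracy
# across variants is compared with accuracy on the original wording; a drop
# signals sensitivity to surface form rather than to the underlying arithmetic.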

Imagine telling a person that pressing a green button opens a door and pressing a red button closes it. If they perform this action thousands of times, their brain creates an efficient, ingrained routine. Now, if you reverse the button functions, it takes time and effort for the person to adapt. They'll likely continue pressing the green button to open the door until the old habit fades.

Ironically, the paper's results show that models like OpenAI's o1, which spend more inference-time effort working through a problem step by step, experience much smaller performance declines. In other words, models designed for reasoning do reason.

In summary:

1. We wouldn't conclude that humans can't reason just because of cognitive hiccups like the Stroop effect. Why, then, should similar challenges in AI be taken as evidence of an absence of reasoning? In fact, these similarities suggest we're on the right path.

2. The evidence clearly shows that scaling works, and there's no reason to believe it won't continue improving future models.


Many countries will dwindle away.
But politicians are not paying attention.


Łódź, Poland. Włókiennicza Street before and after revitalization


The next-generation Tesla Bot (Optimus) hand with 22 degrees of freedom.


Dario Amodei (CEO of Anthropic) says AGI - what he calls "powerful AI" - could come as early as 2026, and that 1,000 years of progress could plausibly happen in the 5-10 years that follow.

Read more: Machines of Loving Grace / How AI Could Transform the World for the Better https://darioamodei.com/machines-of-loving-grace


Links for 2024-10-11


AI:

1. Semantic Training Signals Promote Hierarchical Syntactic Generalization in Transformers https://adityayedetore.github.io/assets/pdf/emnlp_2024_semantic_cues_to_hierarchy.pdf

2. Rewarding Progress: Scaling Automated Process Verifiers for LLM Reasoning. Achieves a 5-6× gain in sample efficiency, 1.5-5× better compute efficiency, and a >6% gain in accuracy over ORMs on test-time search. https://arxiv.org/abs/2410.08146

3. LLMs are in-context RL learners, but not great because they can’t explore well. How do we teach LLMs to explore better? Solution: Supervised fine-tuning on full exploration trajectories. https://arxiv.org/abs/2410.06238

4. Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification https://arxiv.org/abs/2410.05318

5. Differential Transformer outperforms Transformer when scaling up model size and training tokens. https://arxiv.org/abs/2410.05258

6. Can we build more capable AI agents by learning from cognitive science? Cognitive Architectures for Language Agents (CoALA) introduces a structured approach to design AI Agents by integrating cognitive architecture principles with modern LLMs. https://arxiv.org/abs/2309.02427

7. LLMs have original, research-worthy ideas https://learnandburn.ai/p/llms-have-original-research-worthy

8. The spontaneous emergence of “a sense of beauty” in untrained deep neural networks. https://psycnet.apa.org/record/2025-32757-001

9. Complexity exposure drives intelligence in LLMs, with optimal performance at the "edge of chaos." https://www.arxiv.org/abs/2410.02536

10. Math transformers learn better when trained from repeated examples. https://arxiv.org/html/2410.07041v1

11. LLMs Can In-context Learn Multiple Tasks in Superposition https://arxiv.org/abs/2410.05603

12. “I think there is a good chance that normalizing flow-based variational inference will displace MCMC as the go-to method for Bayesian posterior inference as soon as everyone gets access to good GPUs.” https://statmodeling.stat.columbia.edu/2024/10/08/defining-statistical-models-in-jax/

13. AutoDAN-Turbo: A Lifelong Agent for Strategy Self-Exploration to Jailbreak LLMs https://arxiv.org/abs/2410.05295

14. Ancestor simulations: Large Language Models based on historical text could offer informative tools for behavioral science https://www.pnas.org/doi/10.1073/pnas.2407639121

15. Can AI Outpredict Humans? Results From Metaculus's Q3 AI Forecasting Benchmark https://www.lesswrong.com/posts/LHdNtJCm93pxNHJKb/can-ai-outpredict-humans-results-from-metaculus-s-q3-ai

Technology:

1. Expansion microscopy seems to be able to expand proteins to the extent that their structure is viewable by optical microscopy. https://www.nature.com/articles/s41587-024-02431-9

2. Google says its research shows the existence of a "stable computationally complex phase" is reachable with current quantum processors. Even with noise, these quantum computers can perform calculations that are beyond the capabilities of classical supercomputers https://research.google/blog/validating-random-circuit-sampling-as-a-benchmark-for-measuring-quantum-progress/

3. Holographic 3D printing has the potential to revolutionize multiple industries, say Concordia researchers https://www.concordia.ca/news/stories/2024/10/08/holographic-3d-printing-has-the-potential-to-revolutionize-multiple-industries-say-concordia-researchers.html

Miscellaneous:

1. A new study adds evidence that consciousness requires communication between sensory and cognitive regions of the brain’s cortex. https://news.mit.edu/2024/how-sensory-prediction-changes-under-anesthesia-tells-us-how-conscious-cognition-works-1010

2. Values Are Real Like Harry Potter https://www.lesswrong.com/posts/a5hpPfABQnrkfGGxb/values-are-real-like-harry-potter

3. “A simple, and hardly unique economic observation: when you are poor, money is additive. As you get more, it becomes multiplicative. And eventually exponential.” https://aleph.se/andart2/uncategorized/additive-multiplicative-and-exponential-economics/
