Axis of Ordinary


Channel geo and language: Uzbekistan, English
Category: Politics


Memetic and cognitive hazards.
Substack: https://axisofordinary.substack.com/



Links for 2024-06-25

AI:

1. Neural Network Potentials for Enabling Advanced Small-Molecule Drug Discovery and Generative Design https://www.liebertpub.com/doi/10.1089/genbio.2024.0011

2. “ESM3 is a generative language model for programming biology. In experiments, we found ESM3 can simulate 500M years of evolution to generate new fluorescent proteins.” https://www.evolutionaryscale.ai/blog/esm3-release

3. RL on Incorrect Synthetic Data Scales the Efficiency of LLM Math Reasoning by Eight-Fold https://arxiv.org/abs/2406.14532

4. LongRAG: Enhancing Retrieval-Augmented Generation with Long-context LLMs https://arxiv.org/abs/2406.15319

5. Microsoft unveiled Florence-2, a new generalist AI vision model. It uses prompt-based instructions and excels in tasks like captioning, object detection, OCR, and segmentation. Most notably, it's 100x smaller than similar models and open-sourced. https://huggingface.co/microsoft/Florence-2-large

6. Microsoft AI CEO Mustafa Suleyman says it won't be until GPT-6, in two years' time, that AI models will be able to follow instructions and take consistent action. https://youtu.be/cNzRviY4Ei8?si=2ELY9k7ozbs7SQc4&t=1331

7. Frontier AI systems could pose increasing risks to public safety and security. But what level of risk is acceptable? https://arxiv.org/abs/2406.14713

8. LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible. https://www.lesswrong.com/posts/k38sJNLk7YbJA72ST/llm-generality-is-a-timeline-crux

9. “In 2021, researchers were asked how smart AI would be in 2022. They predicted 12% pass-rate on the MATH dataset; AI outperformed significantly, scoring 50%. Now it's at 90%.” https://x.com/Jake_Schwartz/status/1804974011434344631

Neuroscience:

1. New Computational Model of Real Neurons Could Lead to Better AI — “The new model developed by researchers at the Flatiron Institute proposes that biological neurons have more control over their surroundings than previously thought, something that could be replicated in the artificial neural networks used in machine learning.” https://www.simonsfoundation.org/2024/06/24/new-computational-model-of-real-neurons-could-lead-to-better-ai/

2. Brain-Like Language Processing via a Shallow Untrained Multihead Attention Network https://arxiv.org/abs/2406.15109

3. Reconstructing seen movies from mouse visual cortex 🐭🧠 https://www.biorxiv.org/content/10.1101/2024.06.19.599691v1 (video: https://www.youtube.com/watch?v=TA6Oi5NfuMs)

4. A novel treatment has been proven to effectively treat cognitive decline in mice with Alzheimer’s disease. https://www.oist.jp/news-center/news/2024/6/20/damage-synapses-caused-alzheimers-disease-reversed

5. The Path Towards Mammalian Whole-Brain Circuit Mapping https://www.youtube.com/watch?v=QZhvsJw7P_Q

6. Neurotechnology: Past, Present, and Future https://www.youtube.com/watch?v=sNP4_3cbLxA

Technology:

1. Northrop Grumman's solution to drones: a MACE sensor tower, a lightweight 30mm XM914 chaingun and XM1211 proximity-fuzed HE rounds, all working together off the back of two trucks. https://www.youtube.com/watch?v=rr7ym1zkda8

2. Terahertz Waves Supercharged: A Breakthrough With Magnetic Materials https://www.wpi-aimr.tohoku.ac.jp/en/achievements/press/2024/20240610_001808.html

Miscellaneous:

1. NASA and global astronomers await a rare nova explosion, poised to occur and so bright it will be visible from Earth with the naked eye. https://www.nasa.gov/centers-and-facilities/marshall/nasa-global-astronomers-await-rare-nova-explosion/

2. Mistakes people make when thinking about units https://www.lesswrong.com/posts/5vfSNLb92eyXKkQax/mistakes-people-make-when-thinking-about-units


1. Unitree Robotics: https://www.youtube.com/@unitreerobotics

2. https://humanoid-ai.github.io/

Compare this to the 2015 DARPA Robotics Challenge to get a sense of how amazing this progress is: https://www.youtube.com/watch?v=xb93Z0QItVI


Links for 2024-06-23

AI:

1. Grokfast: Accelerated Grokking by Amplifying Slow Gradients — “This analysis allows us to accelerate the grokking phenomenon more than ×50 with only a few lines of code” https://arxiv.org/abs/2405.20233

2. Whiteboard-of-Thought: Thinking Step-by-Step Across Modalities https://whiteboard.cs.columbia.edu/

3. Self-play with Execution Feedback: Improving Instruction-following Capabilities of Large Language Models https://arxiv.org/abs/2406.13542

4. VoCo-LLaMA: Towards Vision Compression with Large Language Models — achieves minimal performance loss with a compression ratio of 576×, resulting in up to 94.8% fewer FLOPs and 69.6% acceleration in inference time. https://arxiv.org/abs/2406.12275

5. Engineering Biology: The Unreasonable Effectiveness of Design / Build / Test https://www.digitalisventures.com/blog/engineering-biology-the-unreasonable-effectiveness-of-design-build-test

6. “These 94 lines of code are everything that is needed to train a neural network. Everything else is just efficiency.” https://x.com/karpathy/status/1803963383018066272

7. Otherness and control in the age of AGI — how agents with different values should relate to each other and the ethics of seeking and sharing power. https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi
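The Grokfast result in item 1 really is a few-lines change. A minimal sketch of the gradient low-pass idea the abstract describes, on a toy problem (the EMA filter is the mechanism from the paper; `alpha`, `lamb`, the learning rate, and the quadratic loss are all illustrative choices, not the paper's values):

```python
import numpy as np

def grokfast_step(theta, grad_fn, ema, alpha=0.9, lamb=2.0, lr=0.01):
    g = grad_fn(theta)
    ema = alpha * ema + (1 - alpha) * g   # low-pass filtered (slow) gradient component
    g_amplified = g + lamb * ema          # amplify the slow component
    return theta - lr * g_amplified, ema

# Toy use on a 1-D quadratic loss L(t) = (t - 3)^2, with gradient 2(t - 3).
theta, ema = 0.0, 0.0
for _ in range(500):
    theta, ema = grokfast_step(theta, lambda t: 2 * (t - 3), ema)
print(round(theta, 3))
```

The filtered term acts much like momentum, so the extra cost per step is one buffer and two lines of arithmetic.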

Neuropsychotechnology:

1. “Time Cells” in the Brain are Critical for Complex Learning, Study Shows https://healthcare.utah.edu/newsroom/news/2024/06/time-cells-brain-are-critical-complex-learning-study-shows

2. Towards a Less Bullshit Model of Semantics https://www.lesswrong.com/posts/RrQftNoRHd5ya54cb/towards-a-less-bullshit-model-of-semantics

3. Language is primarily a tool for communication rather than thought https://news.ycombinator.com/item?id=40756176

Psychology:

1. "In an incentivized experiment … we ask whether participants would like to change their predictions if the (vast) majority of experts (or peers) made the other choice. Very few participants are willing to change their predictions" [PDF] https://www.econ.cam.ac.uk/research-files/repec/cam/pdf/cwpe2423.pdf

2. "larger groups lie more, all-male groups stand out in their proclivity to lie, already the first female in a group causes an honesty shift" [PDF] https://docs.iza.org/dp16954.pdf

3. “[C]itizens' policy opinions changed immediately and substantially when their party switched its policy position – even when the new position went against citizens' previously held views.” https://onlinelibrary.wiley.com/doi/abs/10.1111/ajps.12550

Miscellaneous:

1. Estimates of the cost to fix the Y2K problem in the US alone vary from $100 billion to $200 billion. https://www.sciencedirect.com/science/article/abs/pii/S0160791X00000154

2. A black hole of inexplicable mass: it already weighed a billion solar masses when the universe was still in its infancy. https://www.mpg.de/22005299/massive-black-holes-earliest-quasars




“The Rolls-Royce Micro-Reactor will provide reliable, autonomous energy solutions to multiple markets. Providing zero-emission power, our advanced nuclear technologies support many global Net Zero targets, solving energy dependence across many industries.”

https://www.youtube.com/watch?v=EiP4wocRp3g


Links for 2024-06-21

AI:

1. OpenPipe Mixture of Agents: Outperform GPT-4 at 1/25th the Cost https://openpipe.ai/blog/mixture-of-agents

2. Tree Search for Language Model Agents — It is the first tree search algorithm for LLM agents that shows effectiveness on realistic and complex web environments. https://jykoh.com/search-agents

3. DigiRL: Training In-The-Wild Device-Control Agents with Autonomous Reinforcement Learning https://digirl-agent.github.io/

4. Major research into ‘hallucinating’ generative models advances reliability of artificial intelligence. Detecting hallucinations in large language models using semantic entropy. https://www.ox.ac.uk/news/2024-06-20-major-research-hallucinating-generative-models-advances-reliability-artificial

5. A general graph neural network based implicit solvation model for organic molecules in water https://pubs.rsc.org/en/Content/ArticleLanding/2024/SC/D4SC02432J

6. Andrew Ng on using OpenDevin, an open-source agentic coding framework, to design practice problems for his daughter. https://x.com/AndrewYNg/status/1803835964604977663

7. Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data — “We finetune an LLM on just (x,y) pairs from an unknown function f. Remarkably, the LLM can: a) Define f in code b) Invert f c) Compose f —without in-context examples or chain-of-thought. So reasoning occurs non-transparently in weights/activations!” https://arxiv.org/abs/2406.14546

8. “Transformers really do learn world models, but they’re thickly fouled by incoherence, and this makes them behave unreliably, especially out-of-distribution.” — davidad https://arxiv.org/abs/2406.03689

9. Mira Murati: GPT-3 was toddler-level, GPT-4 was a smart high schooler and the next-gen, to be released in a year and a half, will be PhD-level https://youtu.be/yUoj9B8OpR8?si=d2XrrGPy56KPy8qL&t=836
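The finetuning setup in item 7 is easy to picture: the training data is nothing but input-output pairs of an undisclosed function, with no definition and no chain-of-thought. A toy sketch of how such a dataset looks (the function `f` and the prompt format here are invented for illustration, not taken from the paper):

```python
# The "unknown" function the model must infer purely from examples
# (hypothetical choice for this sketch).
def f(x):
    return 3 * x + 7

# Finetuning corpus: bare (x, f(x)) pairs, nothing else.
pairs = [(x, f(x)) for x in range(-10, 11)]
dataset = [
    {"prompt": f"f({x}) = ", "completion": str(y)}
    for x, y in pairs
]
print(dataset[0])
```

The paper's claim is that after finetuning on data of this shape, the model can articulate `f` in code and invert it, without ever seeing a definition in context.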

Miscellaneous:

1. New Study Suggests Universal Laws Govern Brain Structure From Mice to Men https://news.northwestern.edu/stories/2024/06/brains-structure-hangs-in-a-delicate-balance/

2. 82,000-year-old shell beads from North Africa and implications for the origins of modern human behavior https://www.pnas.org/doi/abs/10.1073/pnas.0703877104


Anthropic releases Claude 3.5 Sonnet.

Sonnet outperforms competitor models on key evaluations, at twice the speed of Claude 3 Opus and one-fifth the cost: https://www.anthropic.com/news/claude-3-5-sonnet

Try it for free: https://claude.ai/


Links for 2024-06-19

AI:

1. Jared Kaplan (Anthropic's Chief Scientist): Human level AI by 2030? https://www.youtube.com/watch?v=4a5lzYreMME

2. Ilya Sutskever: I am starting a new company. Superintelligence is within reach. We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. https://www.lesswrong.com/posts/oeZ93QTv39TeB94Wt/ilya-sutskever-created-a-new-agi-startup

3. Transcendence: Generative Models Can Outperform The Experts That Train Them https://arxiv.org/abs/2406.11741v1

4. Learning Iterative Reasoning through Energy Diffusion https://energy-based-model.github.io/ired/

5. PlanRAG: A Plan-then-Retrieval Augmented Generation for Generative Large Language Models as Decision Makers https://arxiv.org/abs/2406.12430

6. LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning https://llarva24.github.io/

7. “New data shows that the Waymo Driver continues to make roads safer. Over 14.8M rider-only miles driven through the end of March, it was up to 3.5x better in avoiding crashes that cause injuries and 2x better in avoiding police-reported crashes than human drivers in SF & Phoenix.” https://x.com/Waymo/status/1803095329304088922

8. “By removing vector quantization, our image generator achieves strong results while enjoying the speed advantage of sequence modeling.” https://arxiv.org/abs/2406.11838

9. FlowMM: Generates stable & novel materials efficiently; Predicts crystal structure accurately; Generalizes Riemannian Flow Matching to point clouds with periodic boundaries https://arxiv.org/abs/2406.04713

10. How Do Large Language Models Acquire Factual Knowledge During Pretraining? https://arxiv.org/abs/2406.11813

11. OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI https://gair-nlp.github.io/OlympicArena/
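One mechanism behind the "transcendence" in item 3 can be seen with plain majority voting: a model trained on many noisy experts approximates their average, and taking the argmax of that average (low-temperature sampling) can beat every individual expert. A toy simulation under that assumption (all sizes and error rates are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = rng.integers(0, 3, size=1000)        # correct "move" at each of 1000 states

# Five noisy experts: each replaces ~40% of answers with a random choice.
experts = []
for _ in range(5):
    noisy = truth.copy()
    flip = rng.random(1000) < 0.4
    noisy[flip] = rng.integers(0, 3, size=flip.sum())
    experts.append(noisy)

# "Low-temperature" aggregate: majority vote across experts per state.
votes = np.stack(experts)
majority = np.array([np.bincount(votes[:, i], minlength=3).argmax()
                     for i in range(1000)])

expert_acc = [float(np.mean(e == truth)) for e in experts]
print(round(float(np.mean(majority == truth)), 2), round(max(expert_acc), 2))
```

The aggregate denoises the experts' independent mistakes, so it scores above the best single expert.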

Technology:

1. “If you love the craft of low level computer programming, you will get a weeks worth of dopamine hits watching this video” https://www.youtube.com/watch?v=40JzyaOYJeY

2. “Ultimately, we’re trying to understand one of the most fascinating and complex objects we have in the known universe.” https://www.youtube.com/watch?v=VSG3_JvnCkU

3. A technological explosion 600,000 years ago sheds light on the ability that made us human. Researchers propose that cumulative culture may predate the separation of the Neanderthal and ‘Homo Sapiens’ lineages, and that a common ancestor could have developed it. https://english.elpais.com/science-tech/2024-06-18/a-technological-explosion-600000-years-ago-sheds-light-on-the-ability-that-made-us-human.html

4. TDK claims insane energy density in solid-state battery breakthrough https://arstechnica.com/gadgets/2024/06/tdk-claims-insane-energy-density-in-solid-state-battery-breakthrough/


Links for 2024-06-18

AI:

1. Getting 50% (SoTA) on ARC-AGI with GPT-4o https://www.lesswrong.com/posts/Rdwui3wHxCeKb7feK/getting-50-sota-on-arc-agi-with-gpt-4o

2. Hypothesis Search: Inductive Reasoning with Language Models https://arxiv.org/abs/2309.05660

3. Introducing PRISM-1: Photorealistic reconstruction in static and dynamic scenes https://wayve.ai/thinking/prism-1/

4. Generating audio for video https://deepmind.google/discover/blog/generating-audio-for-video/

5. “LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.” https://news.mit.edu/2024/understanding-visual-knowledge-language-models-0617

6. How A.I. Is Revolutionizing Drug Development https://www.nytimes.com/2024/06/17/business/ai-drugs-development-terray.html [no paywall: https://archive.is/v54rQ]

7. DeepSeek-Coder-V2: First Open Source Model Beats GPT4-Turbo in Coding and Math https://chat.deepseek.com/

8. Microsoft’s Nadella Is Building an AI Empire. OpenAI Was Just the First Step. https://www.wsj.com/tech/ai/microsoft-nadella-openai-inflection-9727e77a [no paywall: https://archive.is/lBmCg]

9. Sycophancy to subterfuge: Investigating reward tampering in language models https://www.anthropic.com/research/reward-tampering

10. Eliezer Yudkowsky, Scott Aaronson, Liv Boeree, and Joscha Bach debate whether the development of AI is changing what it means to be human. https://iai.tv/video/ai-and-the-end-of-humanity?_auid=2020

Miscellaneous:

1. Technologies enable 3D imaging of whole human brain hemispheres at subcellular resolution https://news.mit.edu/2024/technologies-enable-3d-imaging-whole-human-brain-hemispheres-subcellular-resolution-0617

2. Nanotechnology can already puncture cancer cells and drug-resistant bacteria. What will it do next? https://www.newyorker.com/magazine/2024/06/24/rise-of-the-nanomachines [no paywall: https://archive.is/zIXI2]

3. Analysis of a database with 140,000 white Gentiles and 6,900 Jews finds an IQ gap of 9.8-11.5 points. https://www.sebjenseb.net/p/jewish-iq

4. Remarkable termite landscape is tens of thousands of years old: “Namaqualand’s heuweltjies, it turns out, are the world’s oldest inhabited termite mounds. Some date as far back as between 34,000 and 13,000 years.” https://theconversation.com/worlds-oldest-termite-mounds-discovered-in-south-africa-and-theyve-been-storing-precious-carbon-for-thousands-of-years-230988


Gen-3 Alpha: Runway’s new base model for video generation

A step toward building General World Models.

Prompt 1: A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head.

Prompt 2: Close up shot of a living flame wisp darting through a bustling fantasy market at night.

More here: https://runwayml.com/blog/introducing-gen-3-alpha/


Links for 2024-06-17

AI:

1. Nat Friedman interviewed the Sora creators: "Their view is that it is a world simulator...their view is also that Sora today is a GPT-1 scale, not a lot of data, not a lot of compute, and so we should expect absolutely dramatic improvement in the future as they simply scale it up and thirdly that there’s just a lot more video data than there is text data on the Internet…" https://marginalrevolution.com/marginalrevolution/2024/06/nat-friedman-discusses-sora-and-image-wisdom.html

2. Can language models be used as world simulators? Not really. BUT: LLMs are *rapidly* getting better at predicting states from actions (GPT-3.5 (~30-40%) ➡️ GPT-4 (~70%+) ) and predicting environment changes (GPT-3.5 (~0-13%) ➡️ GPT-4 (~20-50%)). https://arxiv.org/abs/2406.06485

3. DenseAV is an algorithm capable of discovering the meaning of language and locations of sounds just by watching unlabeled videos. DenseAV is completely unsupervised and never sees text during its training. https://mhamilton.net/denseav

4. “Neural networks can "grok" abstract math structures like permutation groups. We found the networks decompose the groups into cosets - special subsets - and use this structure to implement the group multiplication. This is a novel and highly parallelizable algorithm!” https://openreview.net/forum?id=hcQfTsVnBo

5. Can ChatGPT help researchers understand how the human brain handles language? https://www.pnas.org/doi/10.1073/pnas.2410196121

6. A Multimodal Generative AI Copilot for Human Pathology [PDF] https://www.nature.com/articles/s41586-024-07618-3_reference.pdf

7. Study demonstrates how AI can develop more personalised cancer treatment strategies https://www.ox.ac.uk/news/2024-06-17-study-demonstrates-how-ai-can-develop-more-personalised-cancer-treatment-strategies

8. Clifford-Steerable Convolutional Neural Networks https://arxiv.org/abs/2402.14730

9. An AI Bot Is (Sort of) Running for Mayor in Wyoming — ‘I realized that this entity is way smarter than me, and more importantly, way better than some of the outward-facing public servants I see,’ he says. https://www.wired.com/story/ai-bot-running-for-mayor-wyoming/ [no paywall: https://archive.is/U2LyC]
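The coset structure in item 4 is easy to see in ordinary Python on a small stand-in group (Z_6 under addition mod 6 here; the paper studies permutation groups). Decomposing a group into cosets of a subgroup splits multiplication into a "which coset" part and a "position within coset" part, which is the parallelizable structure described:

```python
n = 6
subgroup = {0, 3}            # subgroup H = {0, 3} of Z_6

# Partition the group into cosets g + H.
cosets, seen = [], set()
for g in range(n):
    coset = frozenset((g + h) % n for h in subgroup)
    if coset not in seen:
        seen.add(coset)
        cosets.append(coset)

print(sorted(sorted(c) for c in cosets))
```

The three cosets {0,3}, {1,4}, {2,5} tile the group exactly, and the quotient behaves like Z_3.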

Gaming:

1. “Kasparov versus the World was a game of chess played in 1999 over the Internet. It was a consultation game, in which a World Team of thousands decided each move for the black pieces by plurality vote, while Garry Kasparov conducted the white pieces by himself. More than 50,000 people from over 75 countries participated in the game.” https://en.m.wikipedia.org/wiki/Kasparov_versus_the_World

Politics:

1. What Natalists Should Learn from LGBT — "The LGBT explosion teaches us that high doses of sheer enthusiastic social approval are strong enough to move mountains. Money matters, but a full-blown fertility cult culture — complete with Parent Pride Parades — plausibly could work as well or better." https://www.betonit.ai/p/lessons-of-the-lgbt-explosion

2. New study shows the mean IQ of US college students has been dropping by 0.2 points per year since the mid-20th century.


Dr Irving Finkel explains the oldest playable board game in the world which is almost 5,000 years old!

https://en.wikipedia.org/wiki/Royal_Game_of_Ur


I've posted these papers here before, but just in case you missed it: https://x.com/teortaxesTex/status/1802128370861232374

Of course, much more progress is being made on other fronts as well. There is no sign of a new AI winter. On the contrary, progress is accelerating.

Examples from the past few days:

- Neural algorithmic reasoners https://arxiv.org/abs/2406.09308
- LLMs discovering better algorithms for training LLMs https://sakana.ai/llm-squared/
- Neural network potentials https://pubs.acs.org/doi/10.1021/acsphyschemau.4c00004


Links for 2024-06-15

AI:

1. Introducing Lamini Memory Tuning: 95% LLM Accuracy, 10x Fewer Hallucinations https://www.lamini.ai/blog/lamini-memory-tuning

2. Graduate student Chenguang Li created a hybrid between artificial intelligence and biological intelligence by combining RL, optogenetics, and the C. elegans worm. [PDF] https://klab.tch.harvard.edu/publications/PDFs/gk8172.pdf

3. OpenVLA: An Open-Source Vision-Language-Action Model https://openvla.github.io/

4. How can we ensure that LLM-generated code reliably does precisely what it is supposed to do? DafnyBench: A Benchmark for Formal Software Verification https://arxiv.org/abs/2406.08467

5. China Is Testing More Driverless Cars Than Any Other Country https://www.nytimes.com/2024/06/13/business/china-driverless-cars.html [No paywall: https://archive.is/l6wNU]

6. This humanoid robot can drive cars — sort of https://techcrunch.com/2024/06/12/this-humanoid-robot-can-drive-cars-sort-of/

7. Step-by-Step Diffusion: An Elementary Tutorial https://arxiv.org/abs/2406.08929

8. Stable Minima Cannot Overfit in Univariate ReLU Networks: Generalization by Large Step Sizes — “For the first time, we understand how *flatness*, *edge-of-stability* and *large stepsize* imply (near-optimal) generalization.” https://arxiv.org/abs/2406.06838

9. This AI-Powered Exoskeleton Could Speed Adoption by the Masses https://singularityhub.com/2024/06/14/this-ai-powered-exoskeleton-could-speed-adoption-by-the-masses/

10. OpenAI's revenue is reportedly booming. The ChatGPT maker's annualized revenue is $3.4 billion according to a new report. https://www.engadget.com/openais-revenue-is-reportedly-booming-230324957.html

11. Elon Musk abruptly drops lawsuit against OpenAI and CEO Sam Altman https://www.businesstoday.in/technology/news/story/elon-musk-abruptly-drops-lawsuit-against-openai-and-ceo-sam-altman-432981-2024-06-12

Miscellaneous:

1. Why Biosecurity Matters – What are We Protecting Against? https://www.mackenziemorehead.com/part-i-why-biosecurity-matters-what-are-we-protecting-against/

2. Wikipedia Chemical Structure Explorer https://wikipedia.cheminfo.org/

3. The Big Bang: after 13.8 billion years, its leftover glow still hasn't faded away. Unlike every other light source in the Universe, there's a profound reason why it persists. https://bigthink.com/starts-with-a-bang/big-bang-fade-away/

China:

1. “17-year old vocational high school student Jiang Ping, who spends her days studying fashion design and literally making clothes, places 12th out of 801 finalists for the global Alibaba Math Contest…When asked if she likes fashion design more or Math more, she still said fashion design, and said that she saw her favorite, partial differential equations, present everywhere in fashion designs.” https://x.com/AnonYalie/status/1801735366464180456

2. Is China’s scientific progress real? Yes, according to the Nature Index, which looks at contributors to 145 top international journals. https://x.com/kyleichan/status/1801594688719159683


Links for 2024-06-14

AI:

1. Google presents Transformers meet Neural Algorithmic Reasoners — Significant gains over Transformer for algorithmic reasoning, both in and out of distribution. NARs are capable of holding perfect generalization even on 6× larger inputs than ones seen in the training set, for highly complex algorithmic tasks with long rollouts https://arxiv.org/abs/2406.09308

2. “Giving LLMs search (ability to think for a long time) might kick off artificial superintelligence this year, and nobody is paying attention.” https://yellow-apartment-148.notion.site/AI-Search-The-Bitter-er-Lesson-44c11acd27294f4495c3de778cd09c8d

3. “Here's another pretty incredible example of neural network potentials extrapolating outside their training data in a way I wouldn't expect. We were simulating an electrolyte with an NNP when this happened” https://x.com/TimothyDuignan/status/1801223338909376542

4. Sketching as a Visual Chain of Thought for Multimodal LLMs — "Substantially improves performance on all tasks over strong base models with no sketching, yielding an average gain of 12.7% on math tasks, and 8.6% on vision tasks." https://arxiv.org/abs/2406.09403

5. A US flour firm is using AI weather forecasting to help its farmers decide when to plant and harvest wheat crops, while another is using an AI algorithm to help genetically modify plant varieties in order to make them more resilient. https://www.bbc.com/news/articles/c9xxjx7e2gjo

6. Samba 3.8B, a simple Mamba+Sliding Window Attention architecture that outperforms Phi3-mini on major benchmarks (e.g., MMLU, GSM8K and HumanEval) by a large margin. And it has an infinite context length with linear complexity. https://arxiv.org/abs/2406.07522

7. How Amazon blew Alexa’s shot to dominate AI, according to more than a dozen employees who worked on it https://fortune.com/2024/06/12/amazon-insiders-why-new-alexa-llm-generative-ai-conversational-chatbot-missing-in-action/

8. Former NSA director on the OpenAI board. https://openai.com/index/openai-appoints-retired-us-army-general/

9. Trump on AI https://x.com/jam3scampbell/status/1801441008565293199
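The sliding-window-attention half of item 6's hybrid is what keeps per-token cost constant: each position attends only to itself and the previous window − 1 tokens. A toy single-head NumPy sketch (shapes and the window size are invented for the demo; this is not Samba's code):

```python
import numpy as np

def sliding_window_attention(q, k, v, window):
    T, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(T)
    # Causal + windowed mask: position i sees j with i - window < j <= i.
    mask = (idx[None, :] <= idx[:, None]) & (idx[None, :] > idx[:, None] - window)
    scores = np.where(mask, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = k = v = rng.normal(size=(6, 4))
out = sliding_window_attention(q, k, v, window=3)
print(out.shape)
```

Position 0 can only attend to itself, so its output is exactly its own value vector; later positions mix at most `window` rows of `v`, independent of sequence length.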

Energy:

1. “Nuclear power is expensive largely because nuclear safety is regulated without regard to cost-benefit in a way that isn’t true for the health risks of coal, oil, gas or a million other “useful but potentially hazardous” things.” https://thebreakthrough.org/journal/no-20-spring-2024/its-the-regulation-stupid

2. This Engineer’s Solar Panels Are Breaking Efficiency Records https://spectrum.ieee.org/solar-panels-breaking-efficiency-records

Fertility:

1. Why Fertility is Collapsing: Shocking Stats with @MoreBirths https://www.youtube.com/watch?v=BOR8pTXb3zI

2. Robin Hanson: The Global Fertility Collapse: Consequences and Policy Solutions. https://upbeatglobalist.substack.com/p/robin-hanson-part-1

Politics:

1. The new front in China’s cyber campaign against America — Volt Typhoon: China's cyber-intrusions into US critical national infrastructure, allegedly with a view to wartime sabotage; why the campaign differs from earlier ones; and norms around this sort of "pre-positioning". https://www.economist.com/international/2024/06/13/the-new-front-in-chinas-cyber-campaign-against-america [no paywall: https://archive.is/yUsu4]


Through imitation, this humanoid robot learns to fold clothes👕, unload warehouse racks📦, greet other robots🤖, and more!

HumanPlus! A full-stack system for humanoids to learn motion and autonomous skills from human data. Open-sourced hardware design and code!

Project site: https://humanoid-ai.github.io/


Links for 2024-06-13

AI:

1. Can LLMs invent better ways to train LLMs? “As LLMs become better at generating hypotheses and code, a fascinating possibility emerges: using AI to advance AI itself! As a first step, we got LLMs to discover better algorithms for training LLMs that align with human preferences.” https://sakana.ai/llm-squared/

2. “TiTok not only outperforms state-of-the-art diffusion model…but also reduces the image tokens by 64×, leading to 410× faster generation process.” https://yucornetto.github.io/projects/titok.html

3. A team at Google DeepMind has built a ‘virtual rodent’, in which an artificial neural network actuates a biomechanically realistic model of the rat. The researchers found that activations in the virtual control network accurately predicted neural activity measured from the brains of real rats producing the same behaviors. https://www.sciencedaily.com/releases/2024/06/240611130418.htm

4. Google presents TORAX, the first end-to-end differentiable simulator of tokamak heat transport. Already, major groups in nuclear fusion research are using TORAX for applications around optimizing and simulating tokamak performance. https://arxiv.org/abs/2406.06718

5. “New paper just dropped, showing how to greatly increase math scores on LLMs by combining monte-carlo tree search (MCTS) with a language model. Nice! But... what if instead, we simply tell the LLM to read the paper, and *pretend* it followed those steps?” https://x.com/jeremyphoward/status/1801037736968913128

6. How to leverage AI-synthesized data without catastrophic degradation? Rank-and-prune feedback, from humans or even weaker models, provably restores and even surpasses original performance! https://arxiv.org/abs/2406.07515

7. The Prompt Report: A Systematic Survey of Prompting Techniques https://arxiv.org/abs/2406.06608

8. Towards Lifelong Learning of LLMs https://arxiv.org/abs/2406.06391
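The rank-and-prune feedback in item 6 can be sketched in a few lines: score each AI-synthesized sample with some verifier (which may be a weak model) and train only on the top-ranked fraction. The scoring function and data below are invented for illustration:

```python
def rank_and_prune(samples, score_fn, keep_fraction=0.5):
    """Keep the top-scoring fraction of synthetic samples."""
    ranked = sorted(samples, key=score_fn, reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

# Toy example: synthetic "answers" scored by closeness to a reference value 3.0.
synthetic = [3.1, 2.7, 3.9, 3.0, 5.2, 2.9]
kept = rank_and_prune(synthetic, score_fn=lambda s: -abs(s - 3.0))
print(kept)
```

The claim in the paper is that this kind of filtering provably restores, and can surpass, the performance lost to unfiltered synthetic data.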

Compute:

1. Nvidia Conquers Latest AI Tests​ — GPU maker tops new MLPerf benchmarks on graph neural nets and LLM fine-tuning https://spectrum.ieee.org/mlperf-nvidia-conquers

2. Giant Chips Give Supercomputers a Run for Their Money — Cerebras’s wafer-scale chips excel at molecular dynamics and AI inference https://spectrum.ieee.org/cerebras-wafer-scale-engine

Technology:

1. Customizable fibre networks to create on-skin electrodes that can record electrocardiogram and electromyography signals, skin-gated organic electrochemical transistors and augmented touch and plant interfaces. https://www.nature.com/articles/s41928-024-01174-4

2. Scientists Achieve Million-Fold Energy Enhancement in Diamond Optical Antennas https://pme-cms.prod.uchicago.edu/news/quantum-optical-antennas-provide-more-powerful-measurements-atomic-level

Miscellaneous:

1. Paul Erdős once took a bet that he could quit taking amphetamines for a month: "You've showed me I'm not an addict. But I didn't get any work done. I'd get up in the morning and stare at a blank piece of paper. I'd have no ideas, just like an ordinary person. You've set mathematics back a month." https://x.com/cremieuxrecueil/status/1800985280658149563

2. On Self-Delusion and Bounded Rationality https://www.scottaaronson.com/writings/selfdelusion.html

Politics:

1. “These have undoubtedly been the wildest 72 hours in French politics in my lifetime. Pretty incredible stuff.” https://x.com/RnaudBertrand/status/1801114239572328663




Links for 2024-06-12

AI:

1. ARC Prize: a $1,000,000 competition to create an AI that can adapt to novelty and solve simple reasoning problems. https://arcprize.org/ (Interview: https://www.dwarkeshpatel.com/p/francois-chollet)

2. Matrix Multiplication-Free LLMs cuts memory usage by 10x, boosts training speed by 25.6% https://arxiv.org/abs/2406.02528 (posted previously, signal boosting)

3. ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization — 80% memory and energy reductions over the original LLMs https://arxiv.org/abs/2406.05981

4. New algorithm discovers language just by watching videos https://news.mit.edu/2024/denseav-algorithm-discovers-language-just-watching-videos-0611

5. Google presents Improve Mathematical Reasoning in Language Models by Automated Process Supervision https://arxiv.org/abs/2406.06592

6. Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation https://github.com/FoundationVision/LlamaGen

7. BERTs are Generative In-Context Learners https://arxiv.org/abs/2406.04823

8. RLHF heavily reduces LLM creativity and output variety. https://arxiv.org/abs/2406.05587

9. Google presents Towards a Personal Health Large Language Model: A version of Gemini fine-tuned for text understanding and reasoning over numerical time-series personal health data for applications in sleep and fitness https://arxiv.org/abs/2406.06474

10. How to Build an AI Data Center https://www.construction-physics.com/p/how-to-build-an-ai-data-center

11. Whisper can run real-time transcription locally on your browser https://huggingface.co/spaces/Xenova/realtime-whisper-webgpu
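The core trick behind item 2, as I understand it, is constraining weights to {−1, 0, +1} so a "matrix multiply" reduces to selective additions and subtractions of input columns. A NumPy sketch (the dead-zone quantizer and all shapes are illustrative, not the paper's scheme):

```python
import numpy as np

def ternarize(W, threshold=0.5):
    """Quantize weights to {-1, 0, +1} with a dead zone (illustrative scheme)."""
    return np.where(W > threshold, 1, np.where(W < -threshold, -1, 0))

def matmul_free_linear(x, W_ternary):
    """Compute x @ W_ternary using only additions and subtractions."""
    out = np.zeros((x.shape[0], W_ternary.shape[1]))
    for j in range(W_ternary.shape[1]):
        plus = W_ternary[:, j] == 1
        minus = W_ternary[:, j] == -1
        out[:, j] = x[:, plus].sum(axis=1) - x[:, minus].sum(axis=1)
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 5))
W = ternarize(rng.normal(size=(5, 3)))
# The add/subtract path reproduces the dense matmul exactly.
assert np.allclose(matmul_free_linear(x, W), x @ W)
print("ok")
```

Because no multiplications remain, the layer maps naturally onto cheap fixed-function hardware, which is where the memory and energy savings come from.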

Biotechnology:

1. How personalized cancer vaccines could keep tumours from coming back https://www.nature.com/articles/d41586-024-01717-x

2. A startup engineering microbes for Mars https://www.pioneer-labs.org/

Miscellaneous:

1. African elephants address one another with individually specific name-like calls https://rdcu.be/dKsyl

Archaeology:

1. “By the late 3rd century AD, all essential elements for constructing a steam engine were known by Roman engineers: steam power (in Hero's aeolipile), the crank and connecting rod mechanism (in the Hierapolis sawmill), the cylinder and piston (in metal force pumps), non-return valves (in water pumps) and gearing (in water mills and clocks)” https://en.m.wikipedia.org/wiki/Ancient_Roman_technology

2. Nemi ships: “…the larger ship was an elaborate floating palace, which contained quantities of marble, mosaic floors, heating and plumbing, and amenities such as baths. Both ships featured technology thought to have been developed historically later.” https://en.m.wikipedia.org/wiki/Nemi_ships


Links for 2024-06-11

AI:

1. Self-Tuning: Instructing LLMs to Effectively Acquire New Knowledge through Self-Teaching https://arxiv.org/abs/2406.06326

2. SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals — The proposed approach adaptively breaks down a high-level goal into a tree structure of practical subgoals during interaction with the environment. https://arxiv.org/abs/2406.04784

3. Husky: A Unified, Open-Source Language Agent for Multi-Step Reasoning — “Despite using 7B models, Husky matches or even exceeds frontier LMs such as GPT-4 on these tasks, showcasing the efficacy of our holistic approach in addressing complex reasoning problems.” https://arxiv.org/abs/2406.06469

4. “You can now build AI Web Agents that outperform Gemini and ChatGPT in information retrieval and share them with your community in less than 10 lines of code!” https://github.com/lavague-ai/LaVague

5. The Apple Intelligence announcement: “We're quickly heading into a world where you can open up your phone and just say stuff. It talks back and it knows you. And it just works.” https://x.com/karpathy/status/1800242310116262150

6. NVIDIA’s CEO, Jensen Huang, gives his AI predictions. — “All the factories will be robotic,” Huang said. “The factories will orchestrate robots, and those robots will be building products that are robotic.” https://qz.com/ai-next-wave-robots-nvidia-jensen-huang-blackwell-rubin-1851515953

7. Microsoft presents VALL-E 2: Achieves human parity zero-shot TTS performance for the first time https://arxiv.org/abs/2406.05370

8. “We reproduce the GPT-2 (124M) from scratch. This video covers the whole process: First we build the GPT-2 network, then we optimize its training to be really fast, then we set up the training run following the GPT-2 and GPT-3 paper and their hyperparameters…” https://www.youtube.com/watch?v=l8pRSuU81PU

9. “So my current estimate is 3-5 years for AGI.” https://nonint.com/2024/06/03/general-intelligence-2024/

Neuroscience:

1. “Many neurons exhibit “mixed selectivity,” meaning they can integrate multiple inputs and participate in multiple computations. Mechanisms such as oscillations and neuromodulators recruit their participation and tune them to focus on the relevant information.” https://picower.mit.edu/news/how-brain-flexible-enough-complex-world-without-being-thrown-chaos

2. Many children struggle to learn math—but it’s not well understood why. A new study finds links between brain structure, genes, and individual differences in mathematical abilities and predicts learning outcomes in two math tutoring cohorts. https://scim.ag/7gr

3. Effective cryopreservation of human brain tissue and neural organoids https://www.cell.com/cell-reports-methods/fulltext/S2667-2375(24)00121-8

Miscellaneous:

1. University of Houston lab reports breakthrough in cancer-detecting technology https://houston.innovationmap.com/cellchorus-university-of-houston-nature-cancer-2668463472.html

Politics:

1. “Any system that is not explicitly pro-Weird Nerd will turn anti-Weird Nerd pretty quickly.” https://www.writingruxandrabio.com/p/the-weird-nerd-comes-with-trade-offs
