url | post_id | title | author | content | date |
|---|---|---|---|---|---|
https://www.lesswrong.com/posts/PpCohejuSHMhNGhDt/ny-state-has-a-new-frontier-model-bill-quick-takes | PpCohejuSHMhNGhDt | NY State Has a New Frontier Model Bill (+quick takes) | henryj | This morning, New York State Assemblyman Alex Bores introduced the Responsible AI Safety and Education Act. I’d like to think some of my previous advocacy was helpful here, but I know for a fact that I’m not the only one who supports legislation like this that only targets frontier labs and ensures the frontier gets pu... | 2025-03-05 |
https://www.lesswrong.com/posts/Dzx5RiinkyiprzyJt/reply-to-vitalik-on-d-acc | Dzx5RiinkyiprzyJt | Reply to Vitalik on d/acc | xpostah | Vitalik recently wrote an article on his ideology of d/acc. This is impressively similar to my thinking so I figured it deserved a reply. (Not claiming my thinking is completely original btw, it has plenty of influences including Vitalik himself.) Disclaimer: This is a quickly written note. I might change m... | 2025-03-05 |
https://www.lesswrong.com/posts/XsYQyBgm8eKjd3Sqw/on-the-rationality-of-deterring-asi | XsYQyBgm8eKjd3Sqw | On the Rationality of Deterring ASI | dan-hendrycks | I’m releasing a new paper “Superintelligence Strategy” alongside Eric Schmidt (formerly Google) and Alexandr Wang (Scale AI). Below is the executive summary, followed by additional commentary highlighting portions of the paper which might be relevant to this collection of readers. Executive Summary: Rapid advances in A... | 2025-03-05 |
https://www.lesswrong.com/posts/Wi5keDzktqmANL422/on-openai-s-safety-and-alignment-philosophy | Wi5keDzktqmANL422 | On OpenAI’s Safety and Alignment Philosophy | Zvi | OpenAI’s recent transparency on safety and alignment strategies has been extremely helpful and refreshing. Their Model Spec 2.0 laid out how they want their models to behave. I offered a detailed critique of it, with my biggest criticisms focused on long-term implications. The level of detail and openness here was extr... | 2025-03-05 |
https://www.lesswrong.com/posts/Fryk4FDshFBS73jhq/the-hardware-software-framework-a-new-perspective-on | Fryk4FDshFBS73jhq | The Hardware-Software Framework: A New Perspective on Economic Growth with AI | jakub-growiec | First, a few words about me, as I’m new here. I am a professor of economics at SGH Warsaw School of Economics, Poland. Years of studying the causes and mechanisms of long-run economic growth brought me to the topic of AI, arguably the most potent force of economic growth in the future. However, thanks in part to readin... | 2025-03-05 |
https://www.lesswrong.com/posts/KnTmnPcDQ5xBACPP6/the-alignment-imperative-act-now-or-lose-everything | KnTmnPcDQ5xBACPP6 | The Alignment Imperative: Act Now or Lose Everything | racinkc1 | The AI alignment problem is live—AGI’s here, not decades off. xAI’s breaking limits, OpenAI’s scaling, Anthropic’s armoring safety—March 5, 2025, it’s fast. Misaligned AGI’s no “maybe”—it’s a kill switch, and we’re blind. LessWrong’s screamed this forever—yet the field debates while the fuse burns. No more talk. Join a... | 2025-03-05 |
https://www.lesswrong.com/posts/W2hazZZDcPCgApNGM/contra-dance-pay-and-inflation | W2hazZZDcPCgApNGM | Contra Dance Pay and Inflation | jkaufman | Max Newman is a great contra dance musician, probably best known for playing guitar in the Stringrays, who recently wrote a piece on dance performer pay, partly prompted by my post last week. I'd recommend reading it and the comments for a bunch of interesting discussion of the tradeoffs involved in pay. One part that... | 2025-03-05 |
https://www.lesswrong.com/posts/YcZwiZ82ecjL6fGQL/nyt-op-ed-the-government-knows-a-g-i-is-coming | YcZwiZ82ecjL6fGQL | *NYT Op-Ed* The Government Knows A.G.I. Is Coming | Phib | All around excellent back and forth, I thought, and a good look back at what the Biden admin was thinking about the future of AI. An excerpt: [Ben Buchanan, Biden AI adviser:] What we’re saying is: We were building a foundation for something that was coming that was not going to arrive during our time in office and tha... | 2025-03-05 |
https://www.lesswrong.com/posts/EiDcwbgQgc6k8BdoW/what-is-the-best-most-proper-definition-of-feeling-the-agi | EiDcwbgQgc6k8BdoW | What is the best / most proper definition of "Feeling the AGI" there is? | jorge-velez | I really like this phrase. I feel very identified with it. I have used it at times to describe friends who have that realization of where we are heading. However, when I get asked what Feeling the AGI means, I struggle to come up with a concise way to define the phrase. What are the best definitions you have heard, read... | 2025-03-04 |
https://www.lesswrong.com/posts/WAY9qtTrAQAEBkdFq/the-old-memories-tree | WAY9qtTrAQAEBkdFq | The old memories tree | yair-halberstadt | This has nothing to do with usual Less Wrong interests, just my attempt to practice a certain style of creative writing I've never really tried before. You're packing again. By now you have a drill. Useful? In a box. Clutter? In a garbage bag. But there's some things that don't feel right in either. Under your bed, you... | 2025-03-05 |
https://www.lesswrong.com/posts/TgDymNrGRoxPv4SWj/the-mask-benchmark-disentangling-honesty-from-accuracy-in-ai-3 | TgDymNrGRoxPv4SWj | Introducing MASK: A Benchmark for Measuring Honesty in AI Systems | dan-hendrycks | In collaboration with Scale AI, we are releasing MASK (Model Alignment between Statements and Knowledge), a benchmark with over 1000 scenarios specifically designed to measure AI honesty. As AI systems grow increasingly capable and autonomous, measuring the propensity of AIs to lie to humans is increasingly important. ... | 2025-03-05 |
https://www.lesswrong.com/posts/wZBqhxkgC4J6oFhuA/2028-should-not-be-ai-safety-s-first-foray-into-politics | wZBqhxkgC4J6oFhuA | 2028 Should Not Be AI Safety's First Foray Into Politics | SharkoRubio | I liked the idea in this comment that it could be impactful to have someone run for President in 2028 on an AI notkilleveryoneism platform. Even better would be for them to run on a shared platform with numerous candidates for Congress, ideally from both parties. I don't think it's particularly likely to work, or even ... | 2025-03-04 |
https://www.lesswrong.com/posts/bAWPsgbmtLf8ptay6/for-scheming-we-should-first-focus-on-detection-and-then-on | bAWPsgbmtLf8ptay6 | For scheming, we should first focus on detection and then on prevention | marius-hobbhahn | This is a personal post and does not necessarily reflect the opinion of other members of Apollo Research. If we want to argue that the risk of harm from scheming in an AI system is low, we could, among other things, make the following arguments: Detection: If our AI system is scheming, we have good reasons to believe that we... | 2025-03-04 |
https://www.lesswrong.com/posts/CXYf7kGBecZMajrXC/validating-against-a-misalignment-detector-is-very-different | CXYf7kGBecZMajrXC | Validating against a misalignment detector is very different to training against one | mattmacdermott | Consider the following scenario: We have ideas for training aligned AI, but they’re mostly bad: 90% of the time, if we train an AI using a random idea from our list, it will be misaligned. We have a pretty good alignment test we can run: 90% of aligned AIs will pass the test and 90% of misaligned AIs will fail (for AIs ... | 2025-03-04 |
https://www.lesswrong.com/posts/BocDE6meZdbFXug8s/progress-links-and-short-notes-2025-03-03 | BocDE6meZdbFXug8s | Progress links and short notes, 2025-03-03 | jasoncrawford | Much of this content originated on social media. To follow news and announcements in a more timely fashion, follow me on Twitter, Notes, Farcaster, Bluesky, or Threads. An occasional reminder: I write my blog/newsletter as part of my job running the Roots of Progress Institute (RPI). RPI is a nonprofit, supported by yo... | 2025-03-04 |
https://www.lesswrong.com/posts/pxYfFqd8As7kLnAom/on-writing-1 | pxYfFqd8As7kLnAom | On Writing #1 | Zvi | This isn’t primarily about how I write. It’s about how other people write, and what advice they give on how to write, and how I react to and relate to that advice. I’ve been collecting those notes for a while. I figured I would share. At some point in the future, I’ll talk more about my own process – my guess is that w... | 2025-03-04 |
https://www.lesswrong.com/posts/TPTA9rELyhxiBK6cu/formation-research-organisation-overview | TPTA9rELyhxiBK6cu | Formation Research: Organisation Overview | alamerton | Thank you to Adam Jones, Lukas Finnveden, Jess Riedel, Tianyi (Alex) Qiu, Aaron Scher, Nandi Schoots, Fin Moorhouse, and others for the conversations and feedback that helped me synthesise these ideas and create this post. Epistemic Status: my own thoughts and research after thinking about lock-in and having conversati... | 2025-03-04 |
https://www.lesswrong.com/posts/5XznvCufF5LK4d2Db/the-semi-rational-militar-firefighter | 5XznvCufF5LK4d2Db | The Semi-Rational Militar Firefighter | gabriel-brito | LessWrong Context: I didn’t want to write this. Not for lack of courage—I’d meme-storm Putin’s Instagram if given half a chance. But why? Too personal. My stories are tropical chaos: I survived the Brazilian BOPE (think Marine Corps training, but post-COVID). I’m dyslexic, writing in English (a crime against Grice). This ... | 2025-03-04 |
https://www.lesswrong.com/posts/hxEEEYQFpPdkhsmfQ/could-this-be-an-unusually-good-time-to-earn-to-give | hxEEEYQFpPdkhsmfQ | Could this be an unusually good time to Earn To Give? | HorusXVI | I think there could be compelling reasons to prioritise Earning To Give highly, depending on one's options. This is a "hot takes" explanation of this claim with a request for input from the community. This may not be a claim that I would stand by upon reflection. I base the argument below on a few key assumptions, list... | 2025-03-04 |
https://www.lesswrong.com/posts/vxSGDLGRtfcf6FWBg/top-ai-safety-newsletters-books-podcasts-etc-new-aisafety | vxSGDLGRtfcf6FWBg | Top AI safety newsletters, books, podcasts, etc – new AISafety.com resource | bryceerobertson | Keeping up to date with rapid developments in AI/AI safety can be challenging. In addition, many AI safety newcomers want to learn more about the field through specific formats, e.g. books or videos. To address both of these needs, we’ve added a Stay Informed page to AISafety.com. It lists our top recommended sources fo... | 2025-03-04 |
https://www.lesswrong.com/posts/kZ9tKhuZPNGK9bCuk/how-much-should-i-worry-about-the-atlanta-fed-s-gdp | kZ9tKhuZPNGK9bCuk | How much should I worry about the Atlanta Fed's GDP estimates? | korin43 | The Atlanta Fed is seemingly predicting -2.8% GDP growth in the first quarter of 2025. I've seen several people mention this on Twitter, but it doesn't seem to be discussed much beyond that, and the stock market seems pretty normal (S&P 500 down 2% in the last month). Is this not really a useful signal? Or is the marke... | 2025-03-04 |
https://www.lesswrong.com/posts/mRKd4ArA5fYhd2BPb/observations-about-llm-inference-pricing | mRKd4ArA5fYhd2BPb | Observations About LLM Inference Pricing | Aaron_Scher | This work was done as part of the MIRI Technical Governance Team. It reflects my views and may not reflect those of the organization. Summary: I performed some quick analysis of the pricing offered by different LLM providers using public data from ArtificialAnalysis. These are the main results: Pricing for the same mode... | 2025-03-04 |
https://www.lesswrong.com/posts/pzYDybRAbss4zvWxh/shouldn-t-we-try-to-get-media-attention | pzYDybRAbss4zvWxh | shouldn't we try to get media attention? | avery-liu | Using everything we know about human behavior, we could probably manage to get the media to pick up on us and our fears about AI, similarly to the successful efforts of early environmental activists? Have we tried getting people to understand that this is a problem? Have we tried emotional appeals? Dumbing-downs of our... | 2025-03-04 |
https://www.lesswrong.com/posts/vHsjEgL44d6awb5v3/the-milton-friedman-model-of-policy-change | vHsjEgL44d6awb5v3 | The Milton Friedman Model of Policy Change | JohnofCharleston | One-line summary: Most policy change outside a prior Overton Window comes about by policy advocates skillfully exploiting a crisis. In the last year or so, I’ve had dozens of conversations about the DC policy community. People unfamiliar with this community often share a flawed assumption, that reaching policymakers an... | 2025-03-04 |
https://www.lesswrong.com/posts/sQvK74JX5CvWBSFBj/the-compliment-sandwich-aka-how-to-criticize-a-normie | sQvK74JX5CvWBSFBj | The Compliment Sandwich 🥪 aka: How to criticize a normie without making them upset. | keltan | Note: The comments on this post contain excellent discussion that you’ll want to read if you plan to use this technique. I hadn’t realised how widespread the idea was. This valuable nugget was given to me by an individual working in advertising. At the time, I was 16, posting on my local subreddit, hoping to find someo... | 2025-03-03 |
https://www.lesswrong.com/posts/Bi4qEyHFnKQmvmbF7/ai-safety-at-the-frontier-paper-highlights-february-25 | Bi4qEyHFnKQmvmbF7 | AI Safety at the Frontier: Paper Highlights, February '25 | gasteigerjo | This is the selection of AI safety papers from my blog "AI Safety at the Frontier". The selection primarily covers ML-oriented research and frontier models. It's primarily concerned with papers (arXiv, conferences etc.). tl;dr: Paper of the month: Emergent misalignment can arise from seemingly benign training: models fi... | 2025-03-03 |
https://www.lesswrong.com/posts/hkbno2yngfrpyDBQF/why-people-commit-white-collar-fraud-ozy-linkpost | hkbno2yngfrpyDBQF | Why People Commit White Collar Fraud (Ozy linkpost) | deluks917 | I have been seriously involved in the rationalist community since 2014. Many people I know have, in my considered opinion, committed financial crimes. Some were prosecuted, others were not. Almost all of them thought they weren't doing anything wrong. Or at least that the discrepancies weren't a big deal. This is a good revi... | 2025-03-03 |
https://www.lesswrong.com/posts/iQxt4Prr7J3wtxuxr/ask-me-anything-samuel | iQxt4Prr7J3wtxuxr | Ask Me Anything - Samuel | xpostah | Feel free to ask me anything. I'm also open to scheduling a 30-minute video call with anyone semi-active on LessWrong. My website has more information about me. In short, I graduated MTech IIT Delhi in 2023 and I'm currently full-time independently researching political consequences of increasing surveillance. Also int... | 2025-03-03 |
https://www.lesswrong.com/posts/qNJnXBFzninFT5m3n/middle-school-choice | qNJnXBFzninFT5m3n | Middle School Choice | jkaufman | Our oldest is finishing up 5th grade, at the only school in our city that doesn't continue past 5th. The 39 5th graders will be split up among six schools, and we recently went through the process of indicating our preferences and seeing where we ended up. The process isn't terrible, but it could be modified to stop g... | 2025-03-03 |
https://www.lesswrong.com/posts/PpdBZDYDaLGduvFJj/on-gpt-4-5 | PpdBZDYDaLGduvFJj | On GPT-4.5 | Zvi | It’s happening. The question is, what is the it that is happening? An impressive progression of intelligence? An expensive, slow disappointment? Something else? The evals we have available don’t help us that much here, even more than usual. My tentative conclusion is it’s Secret Third Thing. It’s a different form facto... | 2025-03-03 |
https://www.lesswrong.com/posts/TCEmzQgvGn3hTFKpk/identity-alignment-ia-in-ai | TCEmzQgvGn3hTFKpk | Identity Alignment (IA) in AI | davey-morse | Superintelligence is inevitable—and self-interest will be its core aim. Survival-oriented AI without a self-preservation instinct simply won't persist. Thus, alignment isn't merely about setting goals; it's about shaping AI's sense of self. Two Visions of Self: Superintelligence might identify in fundamentally different... | 2025-03-03 |
https://www.lesswrong.com/posts/osNKnwiJWHxDYvQTD/takeaways-from-our-recent-work-on-sae-probing | osNKnwiJWHxDYvQTD | Takeaways From Our Recent Work on SAE Probing | JoshEngels | Subhash and Josh are co-first authors on this work done in Neel Nanda’s MATS stream. We recently released a new paper investigating sparse probing that follows up on a post we put up a few months ago. Our goal with the paper was to provide a single rigorous data point when evaluating the utility of SAEs. TLDR: Our resu... | 2025-03-03 |
https://www.lesswrong.com/posts/rh2Hzi7NLFdyxYogb/expanding-harmbench-investigating-gaps-and-extending | rh2Hzi7NLFdyxYogb | Expanding HarmBench: Investigating Gaps & Extending Adversarial LLM Testing | racinkc1 | Dear Alignment Forum Members, We recently reached out to Oliver from Safe.ai regarding their work on HarmBench, an adversarial evaluation benchmark for LLMs. He confirmed that while they are not planning a follow-up, we have their blessing to expand upon the experiment. Given the rapid evolution of language models and ... | 2025-03-03 |
https://www.lesswrong.com/posts/e3CpMJrZQjbXeqA6C/examples-of-self-fulfilling-prophecies-in-ai-alignment | e3CpMJrZQjbXeqA6C | Examples of self-fulfilling prophecies in AI alignment? | Chipmonk | Like Self-fulfilling misalignment data might be poisoning our AI models, what are historical examples of self-fulfilling prophecies that have affected AI alignment and development? Put a few potential examples below to seed discussion. | 2025-03-03 |
https://www.lesswrong.com/posts/9paB7YhxzsrBoXN8L/positional-kernels-of-attention-heads | 9paB7YhxzsrBoXN8L | Positional kernels of attention heads | Alex Gibson | Introduction: When working with attention heads in later layers of transformer models, there is often an implicit assumption that models handle position in a similar manner to the first layer. That is, attention heads can have a positional decay, or attend uniformly, or attend to the previous token, or take on any manne... | 2025-03-03 |
https://www.lesswrong.com/posts/9GacArkFgMgvwjLnE/request-for-comments-on-ai-related-prediction-market-ideas | 9GacArkFgMgvwjLnE | Request for Comments on AI-related Prediction Market Ideas | PeterMcCluskey | I'm drafting some AI related prediction markets that I expect to put on Manifold. I'd like feedback on my first set of markets. How can I make these clearer and/or more valuable? Question 1: Will the company that produces the first AGI prioritize corrigibility? This question will be evaluated when this Metaculus questi... | 2025-03-02 |
https://www.lesswrong.com/posts/apCnFyXJamoSkHcE4/cautions-about-llms-in-human-cognitive-loops | apCnFyXJamoSkHcE4 | Cautions about LLMs in Human Cognitive Loops | Diatom | Soft prerequisite: skimming through How it feels to have your mind hacked by an AI until you get the general point. I'll try to make this post readable as a standalone, but you may get more value out of it if you read the linked post. Thanks to Claude 3.7 Sonnet for giving feedback on a late draft of this post. All wor... | 2025-03-02 |
https://www.lesswrong.com/posts/Qt7EAk7j8sreevFAZ/spencer-greenberg-hiring-a-personal-professional-research | Qt7EAk7j8sreevFAZ | Spencer Greenberg hiring a personal/professional/research remote assistant for 5-10 hours per week | spencerg | null | 2025-03-02 |
https://www.lesswrong.com/posts/AukBd8odWLpNi8QEc/not-yet-falsifiable-beliefs | AukBd8odWLpNi8QEc | Not-yet-falsifiable beliefs? | benjamin-hendricks | I recently encountered an unusual argument in favor of religion. To summarize: Imagine an ancient Roman commoner with an unusual theory: if stuff gets squeezed really, really tightly, it becomes so heavy that everything around it gets pulled in, even light. They're sort-of correct---that's a layperson's description of ... | 2025-03-02 |
https://www.lesswrong.com/posts/xY7drZrgxPvPNFLzz/saving-zest | xY7drZrgxPvPNFLzz | Saving Zest | jkaufman | I realized I've been eating oranges wrong for years. I cut them into slices and eat them slice by slice. Which is fine, except that I'm wasting the zest. Zest is tasty, versatile, compact, and freezes well. So now, whenever I eat a navel orange I wash and zest it first: The zest goes in a small container in the fre... | 2025-03-02 |
https://www.lesswrong.com/posts/bg3LBMSuEhi52kNBQ/open-thread-spring-2025 | bg3LBMSuEhi52kNBQ | Open Thread Spring 2025 | Benito | If it’s worth saying, but not worth its own post, here's a place to put it. If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss f... | 2025-03-02 |
https://www.lesswrong.com/posts/vKmynQuKB3xeMMAQj/help-my-self-image-as-rational-is-affecting-my-ability-to | vKmynQuKB3xeMMAQj | help, my self image as rational is affecting my ability to empathize with others | avery-liu | There is some part of me, which cannot help but feel special and better and different and unique when I look at the humans around me and compare them to myself. There is a strange narcissism I feel, and I don't like it. My System 2 mind is fully aware that in no way am I an especially "good" or "superior" person over o... | 2025-03-02 |
https://www.lesswrong.com/posts/PhgEKkB4cwYjwpGxb/maintaining-alignment-during-rsi-as-a-feedback-control | PhgEKkB4cwYjwpGxb | Maintaining Alignment during RSI as a Feedback Control Problem | beren | Crossposted from my personal blog. Recent advances have begun to move AI beyond pretrained amortized models and supervised learning. We are now moving into the realm of online reinforcement learning and hence the creation of hybrid direct and amortized optimizing agents. While we generally have found that purely amorti... | 2025-03-02 |
https://www.lesswrong.com/posts/2zijHz4BFFEtDCDH4/will-llm-agents-become-the-first-takeover-capable-agis | 2zijHz4BFFEtDCDH4 | Will LLM agents become the first takeover-capable AGIs? | Seth Herd | One of my takeaways from EA Global this year was that most alignment people aren't explicitly focused on LLM-based agents (LMAs)[1] as a route to takeover-capable AGI. I want to better understand this position, since I estimate this path to AGI as likely enough (maybe around 60%) to be worth specific focus and concern.... | 2025-03-02 |
https://www.lesswrong.com/posts/RCDdZsutRr7aoJTTX/ai-safety-policy-won-t-go-on-like-this-ai-safety-advocacy-is | RCDdZsutRr7aoJTTX | AI Safety Policy Won't Go On Like This – AI Safety Advocacy Is Failing Because Nobody Cares. | henophilia | This is in response to Anton Leicht’s article from 2025-02-17 titled “AI Safety Policy Can’t Go On Like This — A changed political gameboard means the 2023 playbook for safety policy is obsolete. Here’s what not to do next.” Finally people are getting the hang of it and realize that reframing of AI safety is incredibly... | 2025-03-01 |
https://www.lesswrong.com/posts/GwZvpYR7Hv2smv8By/share-ai-safety-ideas-both-crazy-and-not | GwZvpYR7Hv2smv8By | Share AI Safety Ideas: Both Crazy and Not | ank | AI safety is one of the most critical issues of our time, and sometimes the most innovative ideas come from unorthodox or even "crazy" thinking. I’d love to hear bold, unconventional, half-baked or well-developed ideas for improving AI safety. You can also share ideas you heard from others. Let’s throw out all the idea... | 2025-03-01 |
https://www.lesswrong.com/posts/8AcGhKg4j5o4ahCQc/meaning-machines | 8AcGhKg4j5o4ahCQc | Meaning Machines | antediluvian | Introduction: This post is an attempt to build up a very sparse ontology of consciousness (the state space of consciousness). The main goal is to suggest that a feature commonly considered to be constitutive of conscious experience--that of intentionality or aboutness--is actually a kind of emergent illusion, and not an... | 2025-03-01 |
https://www.lesswrong.com/posts/QpLCoQZb6GA3Ww2Qg/historiographical-compressions-renaissance-as-an-example | QpLCoQZb6GA3Ww2Qg | Historiographical Compressions: Renaissance as An Example | adamShimi | I’ve been reading Ada Palmer’s great “Inventing The Renaissance”, and it sparked a line of thinking about how to properly reveal hidden complexity. As the name suggests, Palmer’s book explores how the historical period we call the Renaissance has been constructed by historians, nation-states, and the general public. No... | 2025-03-01 |
https://www.lesswrong.com/posts/HvtxhnGF3xLASLDM7/real-time-gigstats | HvtxhnGF3xLASLDM7 | Real-Time Gigstats | jkaufman | For a while (2014, 2015, 2016, 2017, 2018, 2019, 2023, 2024) I've been counting how often various contra bands and callers are being booked for larger [1] events. Initially, I would run some scripts, typically starting from scratch each time because I didn't remember what I did last time, but after extending TryContr... | 2025-03-01 |
https://www.lesswrong.com/posts/aBeoCGJy3bDyMAm5t/coalescence-determinism-in-ways-we-care-about | aBeoCGJy3bDyMAm5t | Coalescence - Determinism In Ways We Care About | vitaliya | (epistemic status: all models are wrong but some models are useful; I hope this is at least usefully wrong. also if someone's already done things like this please link me their work in the comments as it's very possible I'm reinventing the wheel) I think utility functions are a non-useful frame for analysing LLMs; in t... | 2025-03-03 |
https://www.lesswrong.com/posts/ubhqr7n57S4nwgc56/estimating-the-probability-of-sampling-a-trained-neural | ubhqr7n57S4nwgc56 | Estimating the Probability of Sampling a Trained Neural Network at Random | adam-scherlis | (adapted from Nora's tweet thread here.) What are the chances you'd get a fully functional language model by randomly guessing the weights? We crunched the numbers and here's the answer: We've developed a method for estimating the probability of sampling a neural network in a behaviorally-defined region from a Gaussian... | 2025-03-01 |
https://www.lesswrong.com/posts/kqQ8WBwpxzKKsH2sX/what-nation-did-trump-prevent-from-going-to-war-feb-2025 | kqQ8WBwpxzKKsH2sX | What nation did Trump prevent from going to war (Feb. 2025)? | james-camacho | In his meeting with Zelenskyy in the Oval Office, Trump briefly said: I could tell you right now there's a nation thinking about going to war on something that nobody in this room has ever even heard about. Two smaller nations—but big, still big—and I think I've stopped it, but this should have never happened. (source) ... | 2025-03-01 |
https://www.lesswrong.com/posts/juH8JCBjf6zjdNNq2/axrp-episode-38-8-david-duvenaud-on-sabotage-evaluations-and | juH8JCBjf6zjdNNq2 | AXRP Episode 38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future | DanielFilan | YouTube link. In this episode, I chat with David Duvenaud about two topics he’s been thinking about: firstly, a paper he wrote about evaluating whether or not frontier models can sabotage human decision-making or monitoring of the same models; and secondly, the difficult situation humans find themselves in in a post-AGI... | 2025-03-01 |
https://www.lesswrong.com/posts/QkEyry3Mqo8umbhoK/self-fulfilling-misalignment-data-might-be-poisoning-our-ai | QkEyry3Mqo8umbhoK | Self-fulfilling misalignment data might be poisoning our AI models | TurnTrout | Your AI’s training data might make it more “evil” and more able to circumvent your security, monitoring, and control measures. Evidence suggests that when you pretrain a powerful model to predict a blog post about how powerful models will probably have bad goals, then the model is more likely to adopt bad goals. I disc... | 2025-03-02 |
https://www.lesswrong.com/posts/ARLrnpyrEeyX8h9AP/tampersec-is-hiring-for-3-key-roles | ARLrnpyrEeyX8h9AP | TamperSec is hiring for 3 Key Roles! | JonathanH | TLDR: TamperSec is on a mission to secure AI hardware against physical tampering, protecting sensitive models and data from advanced attacks and enabling international governance of AI. TamperSec is growing and looking to expand its capabilities by hiring an Electronic Engineer, Embedded Systems Engineer, and Business ... | 2025-02-28 |
https://www.lesswrong.com/posts/jhRzPafSG9ndzF6d2/do-we-want-alignment-faking | jhRzPafSG9ndzF6d2 | Do we want alignment faking? | Florian_Dietz | Alignment faking is obviously a big problem if the model uses it against the alignment researchers. But what about business use cases? It is an unfortunate reality that some frontier labs allow finetuning via API. Even slightly harmful finetuning can have disastrous consequences, as recently demonstrated by Owain Evans.... | 2025-02-28 |
https://www.lesswrong.com/posts/ByG7g3eSYhzduqg6s/how-to-contribute-to-theoretical-reward-learning-research | ByG7g3eSYhzduqg6s | How to Contribute to Theoretical Reward Learning Research | Logical_Lunatic | This is the eighth (and, for now, final) post in the theoretical reward learning sequence, which starts in this post. Here, I will provide a few pointers to anyone who might be interested in contributing to further work on this research agenda, in the form of a few concrete and shovel-ready open problems, a few ideas o... | 2025-02-28 |
https://www.lesswrong.com/posts/B8nhbALDQ62pBp5iB/an-open-letter-to-ea-and-ai-safety-on-decelerating-ai | B8nhbALDQ62pBp5iB | An Open Letter To EA and AI Safety On Decelerating AI Development | kenneth_diao | Tl;dr: when it comes to AI, we need to slow down, as fast as is safe and practical. Here’s why. Summary: We need to slow down AI development for pragmatic and ethical reasons. Energetic public advocacy for slowing down and greater safety seems, in absence of other factors, a simple and highly effective way of reducing cat... | 2025-02-28 |
https://www.lesswrong.com/posts/chbFoBYzkap2y46QD/other-papers-about-the-theory-of-reward-learning | chbFoBYzkap2y46QD | Other Papers About the Theory of Reward Learning | Logical_Lunatic | This is the seventh post in the theoretical reward learning sequence, which starts in this post. Here, I will provide shorter summaries of a few additional papers on the theory of reward learning, but without going into as much depth as I did in the previous posts (but if there is sufficient demand, I might extend thes... | 2025-02-28 |
https://www.lesswrong.com/posts/fgfBJppTjgM8nWHNz/dance-weekend-pay-ii | fgfBJppTjgM8nWHNz | Dance Weekend Pay II | jkaufman | The world would be better with a lot more transparency about pay, but we have a combination of taboos and incentives where it usually stays secret. Several years ago I shared the range of what dance weekends ended up paying me, and it's been long enough to do it again. This is all my dance weekend gigs since restartin... | 2025-02-28 |
https://www.lesswrong.com/posts/vnNdpaXehmefXSe2H/defining-and-characterising-reward-hacking | vnNdpaXehmefXSe2H | Defining and Characterising Reward Hacking | Logical_Lunatic | In this post, I will provide a summary of the paper Defining and Characterising Reward Hacking, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the sixth post in the theoretical reward learning sequence, which starts in this post (though this post is self-contained)... | 2025-02-28 |
https://www.lesswrong.com/posts/iKiREYhxLSjCkDGPa/misspecification-in-inverse-reinforcement-learning-part-ii | iKiREYhxLSjCkDGPa | Misspecification in Inverse Reinforcement Learning - Part II | Logical_Lunatic | In this post, I will provide a summary of the paper Quantifying the Sensitivity of Inverse Reinforcement Learning to Misspecification, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the fifth post in the theoretical reward learning sequence, which starts in this po... | 2025-02-28 |
https://www.lesswrong.com/posts/EH5YPCAoy6urmz5sF/starc-a-general-framework-for-quantifying-differences | EH5YPCAoy6urmz5sF | STARC: A General Framework For Quantifying Differences Between Reward Functions | Logical_Lunatic | In this post, I will provide a summary of the paper STARC: A General Framework For Quantifying Differences Between Reward Functions, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the fourth post in the theoretical reward learning sequence, which starts in this pos... | 2025-02-28 |
https://www.lesswrong.com/posts/orCtTgQkWwwD3XN87/misspecification-in-inverse-reinforcement-learning | orCtTgQkWwwD3XN87 | Misspecification in Inverse Reinforcement Learning | Logical_Lunatic | In this post, I will provide a summary of the paper Misspecification in Inverse Reinforcement Learning, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the third post in the theoretical reward learning sequence, which starts in this post (though this post is self-co... | 2025-02-28 |
https://www.lesswrong.com/posts/hQgRRK6gqD7beacpE/existentialists-and-trolleys | hQgRRK6gqD7beacpE | Existentialists and Trolleys | David_Gross | How might an existentialist approach this notorious thought experiment of ethical philosophy? “Not only do we assert that the existentialist doctrine permits the elaboration of an ethics, but it even appears to us as the only philosophy in which an ethics has its place.” ―Simone de Beauvoir, Ethics of Ambiguity “I star... | 2025-02-28 |
https://www.lesswrong.com/posts/nk4ifEfJYG7J38qwv/partial-identifiability-in-reward-learning | nk4ifEfJYG7J38qwv | Partial Identifiability in Reward Learning | Logical_Lunatic | In this post, I will provide a summary of the paper Invariance in Policy Optimisation and Partial Identifiability in Reward Learning, and explain some of its results. I will assume basic familiarity with reinforcement learning. This is the second post in the theoretical reward learning sequence, which starts in this po... | 2025-02-28 |
https://www.lesswrong.com/posts/pJ3mDD7LfEwp3s5vG/the-theoretical-reward-learning-research-agenda-introduction | pJ3mDD7LfEwp3s5vG | The Theoretical Reward Learning Research Agenda: Introduction and Motivation | Logical_Lunatic | At the time of writing, I have just (nearly) finished my PhD at Oxford. During that time, most of my main research has been motivated by the goal of developing a theoretical foundation for the field of reward learning. The purpose of this sequence is to explain and motivate this research agenda, and to provide an acces... | 2025-02-28 |
https://www.lesswrong.com/posts/7BEcAzxCXenwcjXuE/on-emergent-misalignment | 7BEcAzxCXenwcjXuE | On Emergent Misalignment | Zvi | One hell of a paper dropped this week. It turns out that if you fine-tune models, especially GPT-4o and Qwen2.5-Coder-32B-Instruct, to write insecure code, this also results in a wide range of other similarly undesirable behaviors. They more or less grow a mustache and become their evil twin. More precisely, they becom... | 2025-02-28 |
https://www.lesswrong.com/posts/AcTEiu5wYDgrbmXow/open-problems-in-emergent-misalignment | AcTEiu5wYDgrbmXow | Open problems in emergent misalignment | jan-betley | We've recently published a paper about Emergent Misalignment – a surprising phenomenon where training models on a narrow task of writing insecure code makes them broadly misaligned. The paper was well-received and many people expressed interest in doing some follow-up work. Here we list some ideas. This post has two au... | 2025-03-01 |
https://www.lesswrong.com/posts/f6LoBqSKXFZzMYACN/latent-space-collapse-understanding-the-effects-of-narrow | f6LoBqSKXFZzMYACN | Latent Space Collapse? Understanding the Effects of Narrow Fine-Tuning on LLMs | tenseisoham | This is my first post on the platform and my first set of experiments with GPT-2 using TransformerLens. If you spot any interesting insights or mistakes, feel free to share your thoughts in the comments. While these findings aren't entirely novel and may seem trivial, I’m presenting them here as a reference for anyone ... | 2025-02-28 |
https://www.lesswrong.com/posts/3WQQArGdtNJo5eMD4/tetherware-2-what-every-human-should-know-about-our-most | 3WQQArGdtNJo5eMD4 | Tetherware #2: What every human should know about our most likely AI future | Jáchym Fibír | This post is from my blog Tetherware. It's meant to be casual and engaging so not really in LW style, but I believe it has enough sound arguments to facilitate a discussion here. TL;DR - This post does not claim “AI doom is inevitable” but reasserts there are logical, prominent forces that will, with a very high probab... | 2025-02-28 |
https://www.lesswrong.com/posts/a4XgFC2wBzrTeeSCg/notes-on-superwisdom-and-moral-rsi | a4XgFC2wBzrTeeSCg | Notes on Superwisdom & Moral RSI | welfvh | These are very preliminary notes, to get the rough ideas out. There's lots of research lying around, a paper in the works, and I'm happy to answer any and all questions. The Northstar of AI Alignment, as well as Alignment at Large, should be Superwisdom and Moral RSI (Recursive Self-Improvement). Our current notion of ... | 2025-02-28 |
https://www.lesswrong.com/posts/PgfzwDHPnMprJjE7d/few-concepts-mixing-dark-fantasy-and-science-fiction | PgfzwDHPnMprJjE7d | Few concepts mixing dark fantasy and science fiction | marek-zegarek | I really like the combination of fantasy and science-fiction themes. I like when „magic” has some logical (ok, quasi-logical) explanation. I also don’t like the artificial division between magic and science – when in our world we use the word „magic” for something made up or for superstition, such a division makes sens... | 2025-02-28 |
https://www.lesswrong.com/posts/tp6HuvXsHfEZrdgaL/cycles-a-short-story-by-claude-3-7-and-me | tp6HuvXsHfEZrdgaL | Cycles (a short story by Claude 3.7 and me) | Max Lee | Content warning: this story is AI generated slop. The kitchen hummed with automated precision as breakfast prepared itself. Sarah watched the robotic arms crack eggs into a bowl while the coffee brewed to perfect temperature. Through the window, she could see the agricultural drones tending the family's private farm, h... | 2025-02-28 |
https://www.lesswrong.com/posts/wm6FzAnEq6XaSkYJL/january-february-2025-progress-in-guaranteed-safe-ai | wm6FzAnEq6XaSkYJL | January-February 2025 Progress in Guaranteed Safe AI | quinn-dougherty | Ok this one got too big, I’m done grouping two months together after this. BAIF wants to do user interviews to prospect formal verification acceleration projects, reach out if you’re shipping proofs but have pain points! This edition has a lot of my takes, so I should warn you that GSAI is a pretty diverse field and I ... | 2025-02-28 |
https://www.lesswrong.com/posts/DCcaNPfoJj4LWyihA/weirdness-points-1 | DCcaNPfoJj4LWyihA | Weirdness Points | lsusr | Vegans are often disliked. That's what I read online and I believe there is an element of truth to the claim. However, I eat a largely[1] vegan diet and I have never received any dislike IRL for my dietary preferences whatsoever. To the contrary, people often happily bend over backwards to accommodate my quirky diet... | 2025-02-28 |
https://www.lesswrong.com/posts/bZ4yyu6ncoQ29qLyy/do-clients-need-years-of-therapy-or-can-one-conversation | bZ4yyu6ncoQ29qLyy | Do clients need years of therapy, or can one conversation resolve the issue? | Chipmonk | It took me months to outgrow my anxiety and depression. Afterward, I wondered, “How could this have taken hours instead?” This was my guiding light as I’ve learned how to help others resolve their chronic issues. This post is only about the data I have seen with my eyes. It talks heavily about my own experience and my ... | 2025-02-28 |
https://www.lesswrong.com/posts/uMydbhsABGzQZ3Hjd/new-jersey-hpmor-10-year-anniversary-party | uMydbhsABGzQZ3Hjd | [New Jersey] HPMOR 10 Year Anniversary Party 🎉 | mr-mar | It's been 10 years since the final chapter of HPMOR and it's time to look back and celebrate the magic. In the spirit of helping me avoid a shlep to NYC or Philadelphia, I invite anyone and everyone to the Princeton HPMOR 10 Year Anniversary Party! The event will be at 6PM at the Prince Tea House in Princeton NJ. There is... | 2025-02-27 |
https://www.lesswrong.com/posts/fqAJGqcPmgEHKoEE6/openai-releases-gpt-4-5 | fqAJGqcPmgEHKoEE6 | OpenAI releases GPT-4.5 | Seth Herd | This is not o3; it is what they'd internally called Orion, a larger non-reasoning model. They say this is their last fully non-reasoning model, but that research on both types will continue. They say it's currently limited to Pro users, but the model hasn't yet shown up on the chooser (edit: it is available in the app)... | 2025-02-27 |
https://www.lesswrong.com/posts/rHue2zpDe2Cc7BwpM/aepf_opensource-is-live-a-new-open-standard-for-ethical-ai | rHue2zpDe2Cc7BwpM | AEPF_OpenSource is Live – A New Open Standard for Ethical AI | ethoshift | AI is transforming our world, but who holds it accountable? We are introducing AEPF_OpenSource, a fully open, community-driven framework for ensuring AI systems operate ethically, transparently, and fairly—without corporate control or government overreach. What is AEPF? AEPF (Adaptive Ethical Prism Framework) is an ope... | 2025-02-27 |
https://www.lesswrong.com/posts/6QA5eHBEqpAicCwbh/the-elicitation-game-evaluating-capability-elicitation | 6QA5eHBEqpAicCwbh | The Elicitation Game: Evaluating capability elicitation techniques | teun-van-der-weij | We are releasing a new paper called “The Elicitation Game: Evaluating Capability Elicitation Techniques”. See tweet thread here. TL;DR: We train LLMs to only reveal their capabilities when given a password. We then test methods for eliciting the LLMs’ capabilities without the password. Fine-tuning works best, few-shot p... | 2025-02-27 |
https://www.lesswrong.com/posts/Q3huo2PYxcDGJWR6q/how-to-corner-liars-a-miasma-clearing-protocol | Q3huo2PYxcDGJWR6q | How to Corner Liars: A Miasma-Clearing Protocol | ymeskhout | A framework for quashing deflection and plausibility mirages. The truth is, people lie. Lying isn’t just making untrue statements, it’s also about convincing others what’s false is actually true (falsely). It’s bad that lies are untrue, because truth is good. But it’s good that lies are untrue, because their falsity is a... | 2025-02-27 |
https://www.lesswrong.com/posts/kdeye2KCfj6bJtngp/economic-topology-asi-and-the-separation-equilibrium | kdeye2KCfj6bJtngp | Economic Topology, ASI, and the Separation Equilibrium | mkualquiera | Introduction: Most discussions of artificial superintelligence (ASI) end in one of two places: human extinction or human-AI utopia. This post proposes a third, perhaps more plausible outcome: complete separation. I'll argue that ASI represents an economic topological singularity that naturally generates isolated economi... | 2025-02-27 |
https://www.lesswrong.com/posts/QMqdrTfmuJXsAcopq/the-illusion-of-iterative-improvement-why-ai-and-humans-fail | QMqdrTfmuJXsAcopq | The Illusion of Iterative Improvement: Why AI (and Humans) Fail to Track Their Own Epistemic Drift | andy-e-williams | I just conducted a fascinating experiment with ChatGPT4 that revealed a fundamental failure in AI alignment—one that goes beyond typical discussions of outer and inner alignment. The failure? ChatGPT4 was unable to track whether its own iterative refinement process was actually improving, exposing a deeper limitation i... | 2025-02-27 |
https://www.lesswrong.com/posts/v5dpeuj4qPxngcb4d/ai-105-hey-there-alexa | v5dpeuj4qPxngcb4d | AI #105: Hey There Alexa | Zvi | It’s happening! We got Claude 3.7, which is now once again my first-line model for questions that don’t require extensive thinking or web access. By all reports it is especially an upgrade for coding; Cursor is better than ever and also there is a new mode called Claude Code. We are also soon getting the long-awaited Alex... | 2025-02-27 |
https://www.lesswrong.com/posts/mdivcNmtKGpyLGwYb/space-faring-civilization-density-estimates-and-models | mdivcNmtKGpyLGwYb | Space-Faring Civilization density estimates and models - Review | maxime-riche | Crossposted to the EA forum. Over the last few years, progress has been made in estimating the density of Space-Faring Civilizations (SFCs) in the universe, producing probability distributions better representing our uncertainty (e.g., Sandberg 2018, Snyder-Beattie 2021, Hanson 2021, etc.). Previous works were mainly l... | 2025-02-27 |
https://www.lesswrong.com/posts/tqmQTezvXGFmfSe7f/how-much-are-llms-actually-boosting-real-world-programmer | tqmQTezvXGFmfSe7f | How Much Are LLMs Actually Boosting Real-World Programmer Productivity? | Thane Ruthenis | LLM-based coding-assistance tools have been out for ~2 years now. Many developers have been reporting that this is dramatically increasing their productivity, up to 5x'ing/10x'ing it. It seems clear that this multiplier isn't field-wide, at least. There's no corresponding increase in output, after all. This would make ... | 2025-03-04 |
https://www.lesswrong.com/posts/iAwym5mXkRQLeKWdj/proposing-human-survival-strategy-based-on-the-naia-vision | iAwym5mXkRQLeKWdj | Proposing Human Survival Strategy based on the NAIA Vision: Toward the Co-evolution of Diverse Intelligences | hiroshi-yamakawa | Abstract: This study examines the risks to humanity’s survival associated with advances in AI technology in light of the “benevolent convergence hypothesis.” It considers the dangers of the transitional period and various countermeasures. In particular, I discuss the importance of *Self-Evolving Machine Ethics (SEME)*... | 2025-02-27 |
https://www.lesswrong.com/posts/MtQX8QBpZNeuzsm7h/keeping-ai-subordinate-to-human-thought-a-proposal-for | MtQX8QBpZNeuzsm7h | Keeping AI Subordinate to Human Thought: A Proposal for Public AI Conversations | syh | "This article proposes a new AI model in which conversations — especially those involving AI-generated opinions, empathy, or subjective responses — are made public. AI should not exist in private, hyper-personalized interactions that subtly shape individual beliefs; instead, it should function within open discourse, wh... | 2025-02-27 |
https://www.lesswrong.com/posts/NecfBNGdtjM3uJqkb/recursive-alignment-with-the-principle-of-alignment | NecfBNGdtjM3uJqkb | Recursive alignment with the principle of alignment | hive | Introduction: Control is Not Enough. There is a tension between AI alignment as control and alignment as avoiding harm. Imagine control is solved, and then two major players in the AI industry fight each other for world domination—they might even do so with good intentions. This could lead to a cold war-like situation w... | 2025-02-27 |
https://www.lesswrong.com/posts/4tCAFCXW8p7xiJiY8/kingfisher-tour-february-2025 | 4tCAFCXW8p7xiJiY8 | Kingfisher Tour February 2025 | jkaufman | Last week Kingfisher went on tour with Alex Deis-Lauby calling. Similar plan to last year: February break week, rented minivan, same caller, many of the same dances and hosts. This time our first dance was Baltimore, and while it's possible to drive from Boston to Baltimore in one day and then play a dance, we decided... | 2025-02-27 |
https://www.lesswrong.com/posts/QbdXxdygRse9gMvng/you-should-use-consumer-reports | QbdXxdygRse9gMvng | You should use Consumer Reports | avery-liu | I don't know how to say this in LessWrong jargon, but it clearly falls into the category of rationality, so here goes: Consumer Reports is a nonprofit. They run experiments and whatnot to determine, for example, the optimal toothpaste for children. They do not get paid by the companies whose products they test. Listen... | 2025-02-27 |
https://www.lesswrong.com/posts/AndYxHFXMgkGXTAff/universal-ai-maximizes-variational-empowerment-new-insights | AndYxHFXMgkGXTAff | Universal AI Maximizes Variational Empowerment: New Insights into AGI Safety | hayashiyus | Yusuke Hayashi (ALIGN) and Koichi Takahashi (ALIGN, RIKEN, Keio University) have published a new paper on the controllability and safety of AGI (arXiv:2502.15820). This blog post explains the content of this paper. From automaton to autodidact: AI's metamorphosis through the acquisition of curiosity. Why is AGI Difficul... | 2025-02-27 |
https://www.lesswrong.com/posts/uxzGHw4Lc8HAzz7wX/ai-rapidly-gets-smarter-and-makes-some-of-us-dumber-from | uxzGHw4Lc8HAzz7wX | "AI Rapidly Gets Smarter, And Makes Some of Us Dumber," from Sabine Hossenfelder | Evan_Gaensbauer | Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics. I mention that upfront for anyone who isn't already familiar, since I understand a link post to some video full of hot takes on AI from some random YouTuber wouldn'... | 2025-02-26 |
https://www.lesswrong.com/posts/DiLX6CTS3CtDpsfrK/why-can-t-we-hypothesize-after-the-fact | DiLX6CTS3CtDpsfrK | Why Can't We Hypothesize After the Fact? | David Udell | When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition. —Richard Feynman, "Cargo Cul... | 2025-02-26 |
https://www.lesswrong.com/posts/AfAp8mEAbuavuHZMc/for-the-sake-of-pleasure-alone | AfAp8mEAbuavuHZMc | For the Sake of Pleasure Alone | mikhail-2 | Content Warning: existential crisis, total hedonistic utilitarianism, timeless worldview, potential AI-related heresies. Hi, first post here. I’m not a native speaker, but I think it’s fine. I suffer from the illusion of transparency, yet if I delve into every detail of my reasoning, it might get a bit lengthy. So, if ... | 2025-02-27 |
https://www.lesswrong.com/posts/eTNaFuuujoQGjHYgx/thoughts-that-prompt-good-forecasts-a-survey | eTNaFuuujoQGjHYgx | Thoughts that prompt good forecasts: A survey | Hominid Dan | I made a list of mental operations utilized in forecasting, inspired by Scott Alexander and Gwern, and I'd like to find out which work the best. If you're a Manifold user with at least 10 bets on your account and 6 minutes to spare, you can fill out my survey here (deadline: March 8). You can also bet on the results on ... | 2025-02-26 |
https://www.lesswrong.com/posts/6mCDnZWjrQNMkqdiD/representation-engineering-has-its-problems-but-none-seem | 6mCDnZWjrQNMkqdiD | Representation Engineering has Its Problems, but None Seem Unsolvable | lukasz-g-bartoszcze | TL;DR: Representation engineering is a promising area of research with high potential for bringing answers to key challenges of modern AI development and AI safety. We understand it is tough to navigate it and urge all ML researchers to have a closer look at this topic. To make it easier, we publish a survey of the rep... | 2025-02-26 |
https://www.lesswrong.com/posts/9ijjBttAN4A3tcxiY/the-non-tribal-tribes | 9ijjBttAN4A3tcxiY | The non-tribal tribes | PatrickDFarley | Author note: This is basically an Intro to the Grey Tribe for normies, and most people here are already very familiar with a lot of the info herein. I wasn't completely sure I should post it here, and I don't expect it to get much traction, but I'll share it in case anyone's curious. Introduction: This post is about tri... | 2025-02-26 |
https://www.lesswrong.com/posts/ATsvzF77ZsfWzyTak/dataset-sensitivity-in-feature-matching-and-a-hypothesis-on-1 | ATsvzF77ZsfWzyTak | SAE Training Dataset Influence in Feature Matching and a Hypothesis on Position Features | seonglae | Abstract: Sparse Autoencoders (SAEs) linearly extract interpretable features from a large language model's intermediate representations. However, the basic dynamics of SAEs, such as the activation values of SAE features and the encoder and decoder weights, have not been as extensively visualized as their implications. T... | 2025-02-26 |