21 comments

  • thr0waway001 9 hours ago
    AI reminds me of listening to any person who seems like an intellectual authority on multiple subjects on YouTube and is not afraid to wax confidently on any topic. They seem very intelligent and knowledgeable until they actually talk about something you know.

    In other words, I try to learn from it whenever it does something I can't do, but when it does something I can do, or something I'm really good at, I find myself wanting to correct it because it doesn't do it that well.

    It just seems like a really quick-thinking, fast-executing but, ultimately, mid-skilled / novice person.

    • xbmcuser 5 hours ago
      In the last few years I have come to realize that the first impression of anything is extremely important. If your first few uses were good and wowed you, you will be positive about it; if they were not, you will be negative about it. The bias of the first encounter stays with us no matter what.
    • raincole 5 hours ago
      AI's mistakes are sometimes so subtle.

      Just yesterday I asked Gemini Pro 3.0 this question:

      > Find such colors A and B:

      > A and B are both valid sRGB colors.

      > Interpolating between them in CIELAB space like this

      > C_cielab = (A_cielab + B_cielab) / 2

      > results in a color C that can't be represented in sRGB

      It gave me a correct answer, great!

      ...and then it proceeded to tell me to use Oklab, claiming it doesn't have this problem because the sRGB gamut is convex in Oklab.

      If I didn't know that Oklab has the exact same problem, I would have been fooled. It just sounds too reasonable.
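      For anyone who wants to check this themselves, here is a minimal, self-contained sketch using the standard sRGB/D65 formulas. Red and blue are just one pair whose CIELAB midpoint leaves the gamut; this is illustrative, not the model's answer:

```python
# Demonstrates that averaging two valid sRGB colors in CIELAB can
# produce an out-of-gamut result. Pure stdlib; the constants are the
# standard sRGB / D65 values.

D65 = (0.95047, 1.0, 1.08883)  # reference white point

def srgb_to_lab(rgb):
    """Gamma-decode sRGB, then linear RGB -> XYZ -> CIELAB (D65)."""
    r, g, b = (c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
               for c in rgb)
    xyz = (0.4124 * r + 0.3576 * g + 0.1805 * b,
           0.2126 * r + 0.7152 * g + 0.0722 * b,
           0.0193 * r + 0.1192 * g + 0.9505 * b)
    f = lambda t: t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / n) for v, n in zip(xyz, D65))
    return (116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz))

def lab_to_linear_rgb(lab):
    """CIELAB -> XYZ -> linear RGB. Out-of-gamut colors show up as
    channel values outside [0, 1] (before gamma encoding)."""
    L, a, b = lab
    fy = (L + 16) / 116
    fx, fz = fy + a / 500, fy - b / 200
    finv = lambda t: t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    x, y, z = (finv(t) * n for t, n in zip((fx, fy, fz), D65))
    return (3.2406 * x - 1.5372 * y - 0.4986 * z,
            -0.9689 * x + 1.8758 * y + 0.0415 * z,
            0.0557 * x - 0.2040 * y + 1.0570 * z)

red = srgb_to_lab((1.0, 0.0, 0.0))
blue = srgb_to_lab((0.0, 0.0, 1.0))
mid = tuple((p + q) / 2 for p, q in zip(red, blue))  # roughly (42.8, 79.6, -20.3)
linear = lab_to_linear_rgb(mid)
out_of_gamut = any(c < 0 or c > 1 for c in linear)
print(mid, linear, out_of_gamut)  # the green channel comes out slightly negative
```

      Swapping in Oklab's matrices would show the same failure mode with different offending pairs, per the comment above: the sRGB gamut is not convex in either space.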

      • zozbot234 5 hours ago
        You can sometimes run a quick second check by taking the AI's claim and asking it for an evaluation within a fresh context. It won't be misled by the surrounding text, and its answer will at least be somewhat unbiased (though it might still be quite wrong).

        It helps if you phrase the question openly, not obviously fishing for a yes-or-no answer. Or, if you have to ask a yes-or-no question, make it sound like you're obviously expecting the answer that's actually less likely, so the AI will either (1) be more willing to argue against it, or (2) provide good arguments for it that you might not have considered, because it "knows" the answer is unexpected and it wants to flatter your judgment.

      • AugSun 4 hours ago
        [dead]
    • threethirtytwo 1 hour ago
      It's a good analogy to comfort yourself with. But remember, AI is now being deployed on the front lines of mathematics and is coming up with new theories.

      The reality is much more stark than your description. Yes, in MANY instances it fails at things you know and are an expert at. But in MANY instances it also beats you at what you're good at.

      People who say stuff like the parent poster are completely mischaracterizing the current situation. We are not in a place where AI is "good" but we are "better". No... we are approaching a place where we are good and AI is starting to beat us at our own game. That is the prominent topic, that is what is trending, and that is the impending reality.

      Yet everywhere on HN I see stuff like, oh, AI fails here, or AI fails there. Yeah, AI failing is obvious. It's been failing for most of my life. What's unique about the last couple of years is that it's starting to beat us. Why the denial? Because your typical HNer holds programming as not just a tool, but an identity. Your skill in programming is also a status symbol, and when AI attacks your identity, the first thing you do to defend it is to bend reality and try to reach a different conclusion by looking at everything from a different angle.

      Face Reality.

      • jltsiren 33 minutes ago
        I think that's a category error. Current AI is not better or worse than us but fundamentally different. Its main strength and weakness is that it knows too much about everything. It usually knows more than the user about the topic at hand, but it doesn't know what is actually relevant in that particular situation.

        If you nudge the AI in the right direction, it may surprise you with what it's capable of. But if you nudge it in the wrong direction or just don't give it sufficient context, it can be very confidently wrong.

      • onetokeoverthe 1 hour ago
        [dead]
    • ahd94 9 hours ago
      And it starts showing impatience when it's about to run out of context, more like someone who wants to get out of the office exactly at 5.
      • kykat 7 hours ago
        Not just when running out of context; it's always. Once it fixates on a goal, all hell breaks loose and there's nothing it won't sacrifice to get there. At least that's my experience with Claude Code; I am pressing the figurative brakes all the time.
    • rambambram 5 hours ago
      > In other words, I try to learn from it whenever it does something I can't do...

      So you know it can be full of sh1t on all kinds of topics, and you start learning from it the moment it's 'talking' about subjects you know you don't know about? To me that sounds like the moment to stop, not the moment to start. Or am I missing something?

    • tokioyoyo 7 hours ago
      Have you been actively using paid versions of the flagship models from Ant / OpenAI? I’m just curious if the conclusion was made within the last 6 months or not.
      • lelanthran 6 hours ago
        I got that experience 3 hours ago.
    • bitwize 7 hours ago
      Gell-Mann amnesia. The things it tells you about things you don't know are things that would make a knowledgeable person go "dude, wtf? That's totally wrong."
      • exmadscientist 7 hours ago
        You can really only use AI for: things that are easy to verify; things that you already know how to do but want done faster; things you're learning to do and are just one step out of your reach (so it's still comprehensible to you); or, things that just plain don't matter.

        That's a lot of stuff, but it also doesn't include a lot of the stuff people claim AI can do.

    • Andrei_dev 8 hours ago
      [dead]
  • HarHarVeryFunny 10 hours ago
    > Across studies, participants with higher trust in AI and lower need for cognition and fluid intelligence showed greater surrender to System 3

    So the smart get smarter and the dumb get dumber?

    Well, not exactly, but at least for now, with AI "highly jagged" and unreliable, it pays to know enough NOT to trust it, and indeed to be mentally capable enough that you don't need to surrender to it and can spot the failures.

    I think the potential problems come later, when AI is more capable/reliable, and even the intelligentsia perhaps stop questioning its output, and stop exercising/developing their own reasoning skills. Maybe AI accelerates us towards some version of "Idiocracy" where human intelligence is even less relevant to evolutionary success (i.e. having/supporting lots of kids) than it is today, and gets bred out of the human species? Maybe this is the inevitable trajectory: a species gets smarter when it develops language and tool creation, then peaks, and gets dumber after having created tools that do the thinking for it?

    Pre-AI, a long time ago, I used to think/joke we might go in the other direction - evolve into a pulsating brain, eyes, genitalia and vestigial limbs, as mental work took over from physical, but maybe I got that reversed!

    • RodgerTheGreat 9 hours ago
      I think everyone who believes that they can personally resist the detrimental psychological effects of exposure to LLMs by "remaining aware" or "being careful", because they have cultivated an understanding of how language models work, is falling into precisely the same fallacy as people who think they can't be conned or that marketing doesn't work on them.

      Don't kid yourself. If you use this junk, it's making you dumber and damaging your critical thinking skills, full stop. This is delegation of a core competency. You may feel smarter, or that you're learning faster, or that you're more productive, but to people who aren't addicted to LLMs it sounds exactly like gamblers insisting they have a foolproof system for slots, or alcoholics insisting that a few beers make them a better driver. Nobody outside the bubble is impressed with the results.

      • thesumofall 9 hours ago
        I fully agree that it's close to impossible not to eventually fall into the trap of overrelying on them. However, it's also true that I was able to do things with them that I would never have done otherwise for lack of time or skill (all sorts of small personal apps, tools, and scripts for my hobbies). Maybe it's a bit similar to only reading the comment section of a newspaper instead of the news? It will introduce you to new perspectives, but if you stop reading the underlying news you'll harm your own critical thinking. So maybe it's a bit more grey than black and white?
      • paseante 9 hours ago
        [dead]
    • yoyohello13 20 minutes ago
      Maybe this is the solution to the Fermi paradox. Intelligent species make thinking machines, lose the capacity for thinking in a few generations, then an EMP wipes out the computers and everyone is too stupid to survive.
    • eager_learner 4 hours ago
      Evolution is questionable science. I am not trying to be contrarian. It's not dogma, nor is it an established, scientifically proven theory. Proponents, usually when cornered, shrug and say: 'well, this is the best explanation we have so far'. That's not science. The best possible scenario is speculation by a group of people with mediocre thinking skills.

      Mentioning this here because, just like in your comment, this 'theory' is usually slid inside arguments to make it appear as established science or fact. Kinda like this AI debacle.

  • psybrg-prtcls 26 minutes ago
    Anyone else get the distinct impression that parts of this paper were written by AI?
  • woopsn 8 hours ago
    In the technophile's future, people aren't just getting dumber, not wanting to think, or forgetting how - they aren't allowed to think. Maybe about anything. It's too big a liability, costs too much to support, and moreover detracts from the product. Like Sam A telling those Indian students they aren't worth the energy and water. That's what we're dealing with.
  • vicchenai 8 hours ago
    I've noticed this in my own work with financial data. I used to manually sanity-check numbers from SEC filings and catch weird stuff all the time. Started leaning on LLMs to parse them faster and realized after a few weeks I was just... accepting whatever came back without thinking about it. Had to consciously force myself to go back to spot-checking.

    The "System 3" framing is interesting but I think what's really happening is more like cognitive autopilot. We're not gaining a new reasoning system, we're just offloading the old ones and not noticing.

  • meander_water 7 hours ago
    I'm conflicted about this. As I was reading the paper, my AI detector senses were tingling all over the place.

    Large parts of the paper score a very high probability of being written entirely by AI in GPTZero.

    I'm not sure if I could trust anything written in it.

  • Ozzie_osman 11 hours ago
    When humans have an easy way to do something that is almost as good, we choose that easy way. Call it laziness, energy conservation, coddling, etc. The hard thing then becomes hard to do even when the easy thing isn't available, because the cognitive muscle and the discipline atrophy.

    Like kids who are never taught to do things for themselves.

    • tac19 11 hours ago
      Do you refuse to use a calculator or spreadsheet, because doing long hand division helps you exercise your mental muscle? Do you refuse to use a database, because it will make your memory weaker? Or, do you refuse to use a car, because it makes you less able to walk when the car is unavailable? No. Because the car empowers you to do something that, at the very least, takes a lot longer on foot.

      People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.

      • wongarsu 10 hours ago
        The car seems like a great example of a technology with a lot of problematic side effects. Places that had a more measured adoption ended up a lot better than those that replaced all public transit with cars and routinely demolished neighborhoods to make space for bigger highways

        Cars are an essential part of modern life, but the sweetspot for car adoption isn't on either of the extremes

        • nicoburns 1 hour ago
          > Cars are an essential part of modern life

          In some parts of the world perhaps? They're not an essential part of life in urban areas designed to work well without them. As in, many people can live their lives never using one, let alone owning one.

        • mayukh 9 hours ago
          Tragedy of the commons, perhaps? Good for the individual, bad for society, and we need to find solutions that can balance both.
          • wongarsu 9 hours ago
            I'd call it bad on both levels. The costs imposed by car infrastructure are a tragedy of the commons. But even if you were the only person with a modern car you'd still be hit with the social effects of traveling in the isolation of your private metal box and the health effects of walking or biking less

            On the other hand there are also big positives on both the societal and individual level. That's where the balance comes in. You want some individual travel and part of your logistics to run on cars, but not all of it. And probably a lot less of it than what most people in the 60s to 90s thought

            • datsci_est_2015 3 hours ago
              > But even if you were the only person with a modern car you'd still be hit with the social effects of traveling in the isolation of your private metal box

              For real, the amount of hate and vitriol I see expressed by people behind the “safety” of their steering wheel is unbelievable. Surely driving (excessively) leads to misanthropy like cigarettes to cancer.

      • NegativeLatency 1 hour ago
        I do refuse to use a car frequently, I’ll bike or walk because although it’s harder and sometimes scary, there are other times when it’s really great and I feel more connected to the world around me. Also more relaxed after the little bit of exercise.

        Personally I also hurt my learning of trig identities and stuff because the symbolic algebra engine on my ti-89 was so good that I could rely on it instead of learning the material. Caught up to me in college with harder calc and physics classes.

      • chewbacha 5 hours ago
        Actually, yea, I do a lot of mental calculations to avoid losing my edge on thinking about numbers. I avoid gps navigators for similar reasons.

        But the analogy doesn’t actually hold up anyhow because the calculator and the navigator are deterministic. I can rely on their output.

        LLMs have a probabilistic output that absolutely needs verification every time. I cannot trust them the same way I can trust a calculator.

      • paulryanrogers 7 hours ago
        For about 8y I biked for every possible local trip, usually daily. I wanted to reduce local pollution and get the exercise. It was rough in the wind and cold. I'd do it again if I could.

        Sometimes I take breaks from the calculator and even review math videos because it's embarrassing when I can't help my kid with their homework.

        Taking care in how and when we use AI seems very sensible. Just like we take care how often and how much refined sugar we eat, or how many hours we spend sedentary.

      • bluefirebrand 10 hours ago
        > Do you refuse to use a calculator or spreadsheet, because doing long hand division helps you exercise your mental muscle

        Yeah when I was learning in school we weren't allowed electronics for division, and I think I absolutely would be dumber if I had never done that

        > People have worried with every single new technology that it will enfeeble the masses, rather than empower them, and yet in the end, we usually find ourselves better off.

        If you're posting this from America, you're living in a society that is fatter than ever thanks to cars. So there's surely some nuance here, not every technology upgrade is strictly better with no downsides

      • mrdependable 4 hours ago
        I think we are, in fact, getting dumber.
  • gmuslera 11 hours ago
    The main problem with "System 3" is that it has its own kind of "cognitive biases", like System 1, but those new cognitive biases are designed by marketing, politics, culture, and whatever censors or makes visible the original training data. And that's even if the process, the processing, and everything else around it were perfect (which they are not, i.e. hallucinations).

    But we still have System 1, and we survived and reached this stage because of it, because even a bad guess is better than the slowness of doing things right. It has its problems, but sometimes you must reach a compromise.

    • HPsquared 11 hours ago
      I suppose the publishing process has always existed as system 3. It's just that now we have a new way to read and write with an abstract "rest of the world".
  • kikkupico 11 hours ago
    Contrary to the general opinion, I feel that AI has IMPROVED my cognitive skills. I find myself discovering solutions to problems I've always struggled with (without asking AI about it, of course). I also find myself becoming much better at thinking on my feet during regular conversations. I believe I'm spending more time deep thinking than ever before because I can leave the boring cognitive stuff to AI, and that's giving my mind tougher workouts and making it stronger; but I could be completely wrong.
    • eslaught 10 hours ago
      Without an empirical methodology it's hard to know how true this is. There are known and well-documented human biases (e.g., placebo effect) that could easily be involved here. And besides that, there's a convincing (but often overlooked on HN) argument to be made that modern LLMs are optimized in the same manner as other attention economy technologies. That is to say, they're addictive in the same general way that the YouTube/TikTok/Facebook/etc. feed algorithms are. They may be useful, but they also manipulate your attention, and it's difficult to disentangle those when the person evaluating the claims is the same person (potentially) being manipulated.

      I'd love to see an empirical study that actually dives into this and attempts to show one way or another how true it is. Otherwise it's just all anecdotes.

      • pipes 10 hours ago
        I don't understand how the placebo effect is a human bias. Is it?
        • wongarsu 9 hours ago
          At least in some instances you could frame it that way: You believe that doctors and medicine are effective at treating disease, so when you are sick and a doctor gives you a bottle of sugar pills and you take them, you now interpret your state through the lens that you should feel better. A bias on how you perceive your condition

          That's not all that the placebo effect is. But it's probably the aspect that best fits the framing as bias

          • literalAardvark 9 hours ago
            It's much more than a bias.

            You actually get better through placebo, as long as there's a pathway to it that is available to your body.

            It's a really weird effect.

            The fight isn't against triggering placebo, it's against letting it muddle study results.

            • eager_learner 4 hours ago
              I really love the back-and-forth in this mini-thread, I learned a lot about good thinking skills here. Thanks everyone.
    • himata4113 8 hours ago
      Same here. I observe what AI does as a spectator, and it leads me to find problems and solutions way faster than I would have alone, and much faster than AI could (if it could solve the problem at all).

      This in turn has given me the ability to "double" think. I am consciously thinking while another part of my brain is also thinking about it at a bigger scope than I could consciously grasp.

    • ip26 9 hours ago
      I keep asking it questions, and as I dialogue about the problem, I walk right into the conclusion myself, classic rubber duck. Or occasionally it will say something back, and it’s like “of course! That’s exactly what I’ve been circling without realizing it!”

      This mostly happens with things I’ve already had long cognitive loops on myself, and I’m feeling stuck for some reason. The conversation with the model is usually multiple iterations of explaining to the model what I’m working through.

    • siva7 10 hours ago
      It's so fascinating. I feel the same, but at the same time I feel like most people are getting dumber than before AI (and most seem to struggle adapting to AI).
      • mayukh 9 hours ago
        Because most people either don't know how to use it (for multiple reasons, which AI itself can help them solve) or don't have the right mindset going into it (deeper work is needed).
    • mayukh 9 hours ago
      You are not wrong. AI is an amplifier. You chose to amplify something in particular and it works for you. That's good enough. (Give this as a prompt to your ai as I sense self-doubt here)
    • K0balt 8 hours ago
      This is it for me. I am doing much better high-level work since I don't have to spend much time on lower-level work. I have time to think, explore, reframe, and reanalyse.
  • danilor 8 hours ago
    I couldn't figure out whether this was published in a journal, or only on a preprint server?
  • nasretdinov 9 hours ago
    I mean... I don't really check calculations made by a computer (e.g. by my own programs) all that often either, and I think I'm completely fine :). But I guess the difference is that we kind of know how computers work, and that they're generally super accurate and make mistakes incredibly rarely. "AI" (although I disagree with the "I" part) is wrong incredibly often, and I don't think people appreciate that the difference from the "traditional" approach isn't just significant, it's astronomical: LLMs make things up at least 5% of the time, whereas CPUs make mistakes maybe (10^-12)% of the time or less. That's 12 orders of magnitude or so.
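    Taking the commenter's own figures at face value (the 5% is their rough estimate, not a measured constant), the gap works out like this:

```python
import math

llm_error_rate = 0.05          # "makes things up at least 5% of the time"
cpu_error_rate = 1e-12 / 100   # "(10^-12)%" written as a fraction

# How many powers of ten separate the two failure rates?
orders_of_magnitude = math.log10(llm_error_rate / cpu_error_rate)
print(orders_of_magnitude)  # ~12.7, i.e. "12 orders of magnitude or so"
```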
  • pink_eye 7 hours ago
    Can it design and implement a plutonium electric fuel cell with a 24,000 year half life? We have yet to witness it. Can it automate Farming and Agriculture? These are the real questions. #Born-Crusty
  • johnnymonster 9 hours ago
    blocking access to a site because you don't enable javascript is diabolical
  • andai 9 hours ago
    Damn. I came up with a hypothetical "System 3" last year! I didn't find AI very helpful in that regard though.

    Current status: partially solved.

    Problem: System 2 is supposed to be rational, but I found this to be far from the case. Massive unnecessary suffering.

    Solution (WIP): Ask: What is the goal? What are my assumptions? Is there anything I am missing?

    --

    So, I repeatedly found myself getting into lots of trouble due to unquestioned assumptions. System 2 is supposed to be rational, but I found this to be far from the case.

    So I tried inventing an "actually rational system" that I could "operate manually", or with a little help. I called it System 3, a system where you use a Thinking Tool to help you think more effectively.

    Initial attempt was a "rational LLM prompt", but these mostly devolve into unhelpful nitpicking. (Maybe it's solvable, but I didn't get very far.)

    Then I realized, wouldn't you get better results with a bunch of questions on pen and paper? Guided writing exercises?

    So here are my attempts so far:

    reflect.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...

    unstuck.py - https://gist.github.com/a-n-d-a-i/d54bc03b0ceeb06b4cd61ed173...
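    I haven't looked inside those gists, but a "guided writing" tool of the kind described could be as small as this sketch. The three questions are taken from the checklist above; the function name and everything else here is hypothetical, and the real scripts may look nothing like it:

```python
# Hypothetical sketch of a guided-questions thinking tool in the
# spirit of reflect.py (the actual gist may differ entirely).

QUESTIONS = [
    "What is the goal?",
    "What are my assumptions?",
    "Is there anything I am missing?",
]

def reflect(ask=input):
    """Walk through the checklist, collecting a free-form answer to
    each question. `ask` is injectable (defaults to input()) so the
    tool works interactively or with canned answers."""
    return {q: ask(q + " ") for q in QUESTIONS}

# Non-interactive example: supply canned answers instead of typing.
demo = reflect(ask=lambda q: "(your answer here)")
for question, answer in demo.items():
    print(f"- {question}\n  {answer}")
```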

    --

    I'm not sure what's a good way to get yourself "out of a rut" in terms of thinking about a problem. It seems like the longer you've thought about it, the less likely you are to explore beyond the confines of the "known" (i.e. your probably dodgy/incomplete assumptions).

    I haven't solved System 3 yet, but a few months later found myself in an even more harrowing situation which could have been avoided if I had a System 3.

    The solution turned out to be trivial, but I missed it for weeks... In this case, I had incorrectly named the project, and thus doomed it to limbo. Turns out naming things is just as important in real life as it is in programming!

    So I joked "if being pedantic didn't solve the problem, you weren't being pedantic enough." But it's not a joke! It's about clear thinking. (The negative aspect of pedantry is inappropriate communication. But the positive aspect is "seeing the situation clearly", which is obviously the part you want to keep!)

  • yubainu 42 minutes ago
    [dead]
  • bobokaytop 8 hours ago
    [dead]
  • andrewssobral 7 hours ago
    [dead]
  • ashwinnair99 11 hours ago
    [flagged]
    • n_u 10 hours ago
      Are you an LLM? This comment is written twice in this thread, and of your last 10 comments, 6 use the pattern "X isn't Y" or "X didn't Y, Z did"

      https://news.ycombinator.com/item?id=47469767 > The concern isn't that AI reasons differently.

      https://news.ycombinator.com/item?id=47469834 > The concern isn't that AI reasons differently.

      https://news.ycombinator.com/item?id=47470111 > The problem isn't time.

      https://news.ycombinator.com/item?id=47469760 > Airlines have been quietly expanding what they can remove you for. This isn't really about headphones.

      https://news.ycombinator.com/item?id=47469448 > Good tech losing isn't new, it's just always a bit sad when it happens slowly

      https://news.ycombinator.com/item?id=47469437 > The tool didn't fail here, the person did

      • eslaught 10 hours ago
        Please don't take up space in the comment section with accusations. You can report this at the email below and the mods will look at it:

        > Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

        > https://news.ycombinator.com/newsguidelines.html

        • dgacmu 10 hours ago
          I find it kind of helpful and interesting to see a subset of these called out with a bit of data. Helps keep my LLM detector trained (the one in my brain, that is) and I think it helps a little about expressing the community consensus against this crap. In this case, I'm glad the GP posted something, as it's definitely not mistaken.
      • christophilus 10 hours ago
        Definitely AI. Every comment sounds like GPT.
    • pepperoni_pizza 11 hours ago
      I already noticed that. When I feel lazy, I feel like reaching for the AI. Exactly the same laziness voice that nudges me to drive instead of walking.

      But then I go running and swimming for fun, and there is no laziness voice there telling me to stop, because I enjoy it. And similarly with AI: I only use it for things I don't care about, like various corporate bs. Maybe the cure for AI-brain is to care about and be passionate about things.

      Conversely, does this mean that the kind of people who use AI for everything don't care about anything?

      • necrotic_comp 11 hours ago
        There's something interesting I've found about my interactions with AI - I use it as a thought partner. I don't ask it to solve a problem for me (well, not at first, at least!). I think of it as a tool to work with: engage with the problem together, and it spits out a result that I then test and review.

        I see it as part of the feedback loop, and it speeds up some of the mechanical drudgery, while not removing any of the semantic problems inherent in problem solving. In other words, there's things machines are good at, and things humans are good at - if we each stick to our strengths, we can move incredibly fast.

      • throaway197512 10 hours ago
        I've been using Claude to vibe code my game ideas for the past months (iterated with docs).

      I find that when I think of it as a being named "Claude," like a junior partner who's there to eagerly help me, I get lazy. I think of it as a real, almost slave-like creature, who's there to make everything for me without any regard for itself.

      But when I think of it as a tool, as if it's a hammer or something, I feel much less lazy. I think of it as "building something" using a program, not telling "Claude" what to do and expecting it to happen. I even turn off Claude's verbal responses completely sometimes to help with this. 100% impersonal.

      • delijati 11 hours ago
        That is why i compare it to fast-food. From time to time you enjoy it but you should not consume it too much ;)
    • keiferski 10 hours ago
      ”Which is why the Matrix was redesigned to this, the peak of your civilization. I say your civilization because as soon as we started thinking for you it really became our civilization which is of course what this is all about.“
  • ashwinnair99 11 hours ago
    [flagged]
  • bjourne 9 hours ago
    "Time pressure (Study 2) and per-item incentives and feedback (Study 3) shifted baseline performance but did not eliminate this pattern: when accurate, AI buffered time-pressure costs and amplified incentive gains; when faulty, it consistently reduced accuracy regardless of situational moderators."

    I LOLed.

  • deevelton 8 hours ago
    I've been curious what it could look like (and whether it might be an interesting new type of "post" people make) if readers could see the human prompts, pivots, and steering of the LLM inline within the final polished AI output.