• LenielJerron@lemmy.world · 4 days ago

    A big issue that a lot of these tech companies seem to have is that they don’t understand what people want; they come up with an idea and then shove it into everything. There are services that I have actively stopped using because they started cramming AI into things; for example I stopped dual-booting with Windows and became Linux-only.

    AI is legitimately interesting technology which definitely has specialized use cases, e.g. sorting large amounts of data, or optimizing strategies within highly constrained circumstances (like chess or go). However, as a member of the general public, 99% of what people are pushing as AI these days just seems like garbage: bad art, bad translations, and incorrect answers to questions.

    I do not understand all the hype around AI. I can understand the danger: people who don’t see how bad it is are using it in place of people who know how to do things. But in my teaching, for example, I’ve never had any issues with students cheating using ChatGPT. I semi-regularly run the problems I assign through ChatGPT, and it gets enough of them wrong that I can’t imagine any student would be inclined to keep cheating with it after that first grade comes in. (In this sense, it’s actually impressive technology: we’ve had computers that can do advanced math highly accurately for a while, but we’ve finally developed one that’s worse at math than the average undergrad in a gen-ed class!)

  • 2pt_perversion@lemmy.world · 4 days ago

    Some people seem to have a need to discredit AI that goes overboard. Some friends and family who have never really used LLMs outside of Google search feel compelled to tell me how bad it is.

    But generative AIs are really good at tasks I wouldn’t have imagined a computer doing just a few years ago. Even if they plateaued right where they are now, it would lead to major shakeups in humanity’s current workflow. It’s not just hype.

    The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies that try to shove AI where it doesn’t really fit, like AI-enabled fridges and toasters.

    • buddascrayon@lemmy.world · 4 days ago

      The part that is overhyped is companies trying to jump the gun and wholesale replace workers with unproven AI substitutes. And of course the companies that try to shove AI where it doesn’t really fit, like AI-enabled fridges and toasters.

      This is literally the hype. This is the hype that is dying and needs to die, because generative AI is a tool with fairly specific uses. But it is being marketed by literally everyone who has it as general AI that can “DO ALL THE THINGS!”, which it’s not and never will be.

      • five82@lemmy.world · 3 days ago

        The obsession with replacing workers with AI isn’t going to die. It’s too late. The large financial company that I work for has been obsessively tracking hours saved in developer time with GitHub Copilot. I’m an older developer and I was warned this week that my job will be eliminated soon.

        • buddascrayon@lemmy.world · 3 days ago

          The large financial company that I work for

          So the money-obsessed company you work for thinks it has found a way to make more money by getting rid of you, and you’re surprised by this?

          At least you’ve been forewarned. Take the opportunity to abandon ship. Don’t be the last one standing when the music stops.

          • five82@lemmy.world · 3 days ago

            I never said that I was surprised. I just wanted to point out that many companies like my own are already making significant changes to how they hire and fire. They need to justify their large investment in AI even though we know the tech isn’t there yet.

    • andallthat@lemmy.world · 3 days ago

      Goldman Sachs, quoted in the article:

      “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.”

      Generative AI can indeed do impressive things from a technical standpoint, but not enough revenue has been generated so far to offset the enormous costs. As with other technologies, it might just take time (remember how many billions Amazon burned before turning into a cash-generating machine? And Uber has also only just started turning a profit), plus a great deal of enshittification once more people and companies are dependent on it. Or it might just be a bubble.

      As humans we’re not great at predicting these things, myself of course included. My personal prediction? A few companies will make money, especially the ones that start selling AI as a service at increasingly high prices; many others will fail, and both AI enthusiasts and detractors will claim they were right all along.

    • sudneo@lemm.ee · 4 days ago

      Even if they plateaued right where they are now, it would lead to major shakeups in humanity’s current workflow

      Like which one? ChatGPT has been out for two years now, and we already have quite a lot of (good?) models. Which shakeup do you think is happening or going to happen?

      • AA5B@lemmy.world · 3 days ago

        I don’t know anything about the online news business, but it certainly appears to have changed. Most of it is dreck either way, and those organizations are not a positive contributor to society, but they are there, it is a business, and it has changed society.

        • sudneo@lemm.ee · 3 days ago

          I don’t see the change. Sure, there are spam websites with AI content that weren’t there before, but is that the news business at all? All the major publishers and newspapers don’t (seem to) use AI, as far as I can tell.

          Also, I would argue this is not much of a change, except maybe in how simple it is to generate fluff. All of this has existed for 20 years now, and it’s a byproduct of the online advertisement business (which, for sure, was a major change in society!). AI pieces are just yet another way to generate content in the hope of getting views.

    • Eldritch@lemmy.world · 4 days ago

      Computers have always been good at pattern recognition. This isn’t new. LLMs are not a type of actual AI. They are programs capable of recognizing patterns and loosely reproducing them in semi-randomized ways. The reason these so-called generative AI solutions have trouble generating the right number of fingers is not only that they have no idea how many fingers a person is supposed to have; they have no idea what a finger is.

      The same goes for code completion. They will just generate something that fills the pattern they’re told to look for, whether it’s right or wrong, because they have no concept of right or wrong beyond fitting the pattern. Not to mention that we’ve had code completion software for over a decade at this point; LLMs do it less efficiently and less reliably. The only upside is that sometimes they can recognize and suggest a pattern that those programming the other coding helpers might have missed. Outside of that, such as generating whole blocks of code or even entire programs, you can’t even get an LLM to reliably spit out a hello world program.

    • Valmond@lemmy.world · 3 days ago

      Like what outcome?

      I have seen gains on cell detection, but it’s “just” a bit better.

    • Modern_medicine_isnt@lemmy.world · 3 days ago

      See now, I would prefer AI in my toaster. It should be able to learn to adjust the cook time to what I want no matter what type of bread I put in it. Though is that really AI? It could be. Same with my fridge: learn what gets used and what doesn’t, then give my wife the numbers on that damn clear box of salad she buys at Costco every time, which takes up a ton of space and always goes bad before she eats even 5% of it. These would be practical benefits to the crap that is day-to-day life, and far more impactful than search results I can’t trust.
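
      A minimal sketch of the kind of “learning” I mean, assuming the toaster just stores a preferred cook time per bread type and nudges it after feedback (the AdaptiveToaster class, bread names, and step size here are made up purely for illustration):

      ```python
      # Hypothetical adaptive toaster: keep a cook time per bread type and nudge it
      # after each round of user feedback. No LLM required; just a simple feedback loop.
      class AdaptiveToaster:
          def __init__(self, default_seconds=120, step=10):
              self.times = {}                      # learned cook time per bread type
              self.default_seconds = default_seconds
              self.step = step                     # seconds to adjust per piece of feedback

          def cook_time(self, bread_type):
              return self.times.get(bread_type, self.default_seconds)

          def feedback(self, bread_type, verdict):
              """verdict is one of: 'too_light', 'too_dark', 'just_right'."""
              seconds = self.cook_time(bread_type)
              if verdict == "too_light":
                  seconds += self.step
              elif verdict == "too_dark":
                  seconds -= self.step
              self.times[bread_type] = max(30, seconds)   # never drop below 30 seconds

      toaster = AdaptiveToaster()
      toaster.feedback("sourdough", "too_light")
      print(toaster.cook_time("sourdough"))   # 130: the next sourdough run toasts longer
      ```

      Whether you call that AI or just a thermostat with a memory is exactly the labeling question.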

      • AA5B@lemmy.world · 3 days ago

        I agree with your wife: there’s always an aspirational salad in the fridge. For most foods, I’m pretty good at not buying stuff we won’t eat, but we always should eat more veggies. I don’t know how to persuade us to eat more veggies, but step 1 is availability. Like that Reddit meme:

        1. Availability
        2. ???
        3. Profit by improved health
  • nroth@lemmy.world · 4 days ago

    “Built to do my art and writing so I can do my laundry and dishes.” Embodied agents are where the real value is. The chatbots are just fancy tech demos that folks started selling because people were buying.

    • bradd@lemmy.world · 3 days ago

      Eh, my best coworker is an LLM. Full of shit, like the rest of them, but always available and willing to help out.

        • AA5B@lemmy.world · 3 days ago

          Just like every other coworker, it’s important to know what tasks they do well and where they typically need help.

          • finitebanjo@lemmy.world · 2 days ago

            Lmao, your stance is really “every coworker makes all product lower quality by nature of existence”? That’s some hardcore cope you’re smoking.

            • AA5B@lemmy.world · 2 days ago

              Every coworker has a specific type of task they do well and known limits you should pay attention to.

              • finitebanjo@lemmy.world · 2 days ago

                Yes, and therefore any two employees must never be allowed to speak to each other. You know, because it makes all of their work worse quality. /s

    • nroth@lemmy.world · 4 days ago

      Though the image generators are actually good. The visual arts will never be the same after this.

      • LifeInMultipleChoice@lemmy.world · 3 days ago

        Compare it to the microwave. Is it good at something? Yes. But if you shove your fucking turkey in it at Thanksgiving and expect good results, you’re ignorant of how it works. Most people are expecting language models to do shit they aren’t meant to. Most of it isn’t new technology either, just old tech that people slapped a new label on. I wasn’t playing Soulcalibur on the Dreamcast against AI opponents… yet now they are called AI opponents with no requirement to be any different. GoldenEye on the N64 was man vs. AI. Madden 1995: AI. “Where did this AI boom come from!”

        Marketing and mislabeling. Online classes, call it AI. Photo editors, call it AI.

  • eran_morad@lemmy.world · 3 days ago

    I’m buying semis. I don’t see AI, construed broadly, as ever shrinking from its current position.

    • SlopppyEngineer@lemmy.world · 4 days ago

      The article does mention that when the AI bubble deflates, the big players will take the defunct AI infrastructure, fold it into their cloud business to grab more of that market, and, in the end, make the line go up.

      • Alphane Moon@lemmy.world · 4 days ago

        Assuming a large decline in demand for AI compute, what would be the use cases for renting out older AI compute hardware on the cloud? Where would the demand come from? Prices would also go down with a decrease in demand.

  • iAvicenna@lemmy.world · 2 days ago

    Oh wow, who would have guessed that business consultancy companies are generally built on bullshitting about things they don’t really have a grasp of.

  • computerscientistII@lemm.ee · 3 days ago

    I have saved a lot of time thanks to ChatGPT. Need to sign up some of my pupils for a competition by uploading their data in a CSV file to some platform? Just copy and paste their data into ChatGPT and prompt it to create the file. The boss (headmaster) wants some reasoning for why I need paid time for certain projects? Let ChatGPT do the reasoning. Need some exercises for one of my classes that doesn’t really come to grips with while loops? Let ChatGPT create those exercises (some smartasses will of course then have ChatGPT solve them). The list goes on…
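
    To give an idea, a while-loop exercise of the kind I mean (with a reference solution) might look something like this; it’s just an illustrative sketch included here, not actual ChatGPT output:

    ```python
    # Exercise: "Starting from a positive integer, repeatedly halve it using integer
    # division and count how many steps it takes to reach 1."
    # Reference solution using a while loop:
    n = int(input("Enter a positive integer: "))
    steps = 0
    while n > 1:
        n //= 2        # e.g. 7 -> 3 -> 1 takes 2 steps
        steps += 1
    print("Steps to reach 1:", steps)
    ```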

    • solstice@lemmy.world · 3 days ago

      ChatGPT is basically like a really good intern, and I use it heavily that way. I run literally every email through it and say “respond to so-and-so, say xyz”, then do maybe a little refining, copy, paste, done.

      The other day, my boss sent me an Excel file with a shitload of data in it that he wanted me to analyze in some particular way. I just copy-pasted it into GPT and asked, and it spit out the correct response. Then my boss asked me to do something else that required a bit of Excel finagling that I didn’t really know how to do, so I asked GPT, and it told me the formula, which worked on the first try.

      So basically it helps me accomplish tasks in seconds that previously would’ve taken hours. If anything, I think markets are currently undervalued, because remarkably, fucking NONE of my colleagues or friends are using it at all yet. Once there’s widespread adoption, which will pretty much have to happen if anyone wants to stay competitive once it gains more traction, look out…

      • AA5B@lemmy.world · 3 days ago

        The thing about tech bubbles is that everyone rushes in full bore, in the hope that they can be the ones whose moonshot goes the distance. However, even in the case where the technology achieves all its promise, most of those early attempts will not. Soon enough, we’ll be down to the top few, and only their datacenters will need to exist. Many of these failures will go away.

        • Flying Squid@lemmy.world · 3 days ago

          Many of these failures will go away

          In a rational, non-capitalist world, yes. In our world, all of those data centres will last until they can’t find a way to squeeze some sort of profit out of them.

  • UraniumBlazer@lemm.ee · 4 days ago

    I have no idea how people can consider this to be a hype bubble, especially after the o3 release. It smashed the ARC-AGI benchmark on the performance front. It ranks as the 175th best competitive coder in the world on Codeforces’ leaderboard.

    o3 proved that it is possible to have at least an Expert AGI, if not a Virtuoso AGI (according to DeepMind’s definition of AGI levels). Sure, it’s not economical yet. But it will get there very soon (just like how the earlier GPTs were a lot dumber and took a lot more energy than the newer, smaller-parameter models).

    Please remember - fight to seize the means of production. Do not fight the means of production themselves.

    • Omega_Jimes@lemmy.ca · 4 days ago

      o3 made the high score on ARC through brute force, not by being good. Raising the score from 75% to 87% required 175 times more computing power, which is not exactly stunning returns.

    • dustyData@lemmy.world · 4 days ago

      Unless we invent cold fusion within the next 5 years, they will never be economical. They are the most energy-inefficient thing ever invented by humanity, and all prediction models state that it will cost more energy, not less, to keep making them better. They will never be energy efficient nor economical in their current state, and most companies are out of ideas on how to shake things up. Even the people who created generative models agree that they have just been brute-forcing it by making the models larger, with more energy consumption. When you try to make them smaller or more energy efficient, they fall off a performance cliff and only produce garbage. I’m sure there are researchers doing cool stuff, but it is neither economical nor efficient.

      • ricdeh@lemmy.world · 4 days ago

        Untrue. There are small models that produce better output than the previous “flagships” like GPT-2. Also, you can achieve much more than we currently do with far less energy by working on novel, specialised hardware (neuromorphic computing).

    • SupraMario@lemmy.world · 4 days ago

      Why is it getting an AGI stamp now? I was under the impression that humanity has not delivered a sentient AI, which is what the AGI title was supposed to be used for… has that been pulled back again?