That's great, but you're not everyone, and you're not fielding everyone's calls either.
I'm in Healthcare. A massive chunk of our calls are simply "you have an order expected on (date), and shipping to (your address), is this information correct? Yes? Awesome, kthxbye".
That's it. By using automated dialers for that kind of thing, we're freeing up a ton of time for the real people to do the more difficult, hands-on customer service.
I'm gonna say it: you're the same person my great-grandfather was, complaining about ATMs because they were overcomplicated.
Absolutely, 100%. We aren't just plugging in an LLM and letting it handle calls willy-nilly. We're telling it, like a robot, exactly what to do, and the LLM only comes into play when it's interpreting the intent of the person on the phone within the conversation they're having.
So for instance, as we develop this for our end users, we're building out functionality in pieces. For each piece where we know we can't do that (yet), we "escalate" the call to the real person at the call center for them to handle. As we develop more, these escalations get fewer; however, there are many instances that will always escalate. For instance, if the user says "let me speak to a person" or something to that effect, we'll escalate right away.
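That escalation routing can be sketched in a few lines. This is a minimal, hypothetical illustration, not our actual code; the intent names and the `route` function are made up for the example.

```python
# Hypothetical sketch of the escalation routing described above.
# Intent labels here are invented for illustration.

UNSUPPORTED_INTENTS = {"billing_dispute", "change_insurance"}  # not built yet
ALWAYS_ESCALATE = {"request_human"}  # e.g. "let me speak to a person"

def route(intent: str) -> str:
    """Decide whether the bot handles this intent or a live agent does."""
    if intent in ALWAYS_ESCALATE or intent in UNSUPPORTED_INTENTS:
        return "escalate"  # hand the call to a real person
    return "handle"        # bot proceeds with its scripted flow

print(route("request_human"))     # escalate
print(route("confirm_shipment"))  # handle
```

As pieces get built, intents simply move out of `UNSUPPORTED_INTENTS`; the always-escalate set never shrinks.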
For things the LLM can actually do against that user's data, those are hard-coded actions we control; it didn't come up with them, and it doesn't decide to run them, we do. It isn't Skynet, and it isn't close either.
The LLM's actual functional use is limited to understanding the intent of the user's speech, that's all. That's how it's being used all over (to great effect).
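The pattern above can be sketched like this: the model only returns an intent label, and every action it can trigger is a function we wrote and whitelisted. The `classify_intent` stub, the intent names, and the handlers are all hypothetical stand-ins, not a real system.

```python
# Hypothetical sketch: the LLM classifies the caller's utterance into one
# of a fixed set of intents; every action is hard-coded by the developers.

def classify_intent(utterance: str) -> str:
    """Stand-in for the LLM call: map free-form speech to an intent label."""
    text = utterance.lower()
    if "yes" in text or "correct" in text:
        return "confirm_order"
    return "unknown"

def confirm_order(order_id: str) -> str:
    # A hard-coded action we control; the model never invents actions.
    return f"Order {order_id} confirmed."

ACTIONS = {"confirm_order": confirm_order}  # the whitelist

def handle(utterance: str, order_id: str) -> str:
    action = ACTIONS.get(classify_intent(utterance))
    return action(order_id) if action else "ESCALATE"

print(handle("Yes, that's correct", "A123"))  # Order A123 confirmed.
print(handle("I want a refund", "A123"))      # ESCALATE
```

Anything the classifier can't map to a whitelisted action falls through to escalation, which is why the model's scope stays so narrow.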
I'm actually working on an LLM "AI" augmented call center application right now, so I've got a bit of experience in this.
Before anyone starts doomsaying, keep in mind that when you narrow the goal and focus of a machine-learning model, it gets dramatically better at the job. Way better at the job than people.
ChatGPT on its own has a massive scope, and that flexibility means it's going to do the bad things we know it to do. That's why ChatGPT sucks.
But build an LLM focused on managing a call center that handles just one topic. That's what's going on virtually everywhere right now. This article gets that "based on ChatGPT" in for clicks and fear-mongering.
Oh that's easy, my younger self just gets an ass whoopin'. I can take him, he's a big coward.
My high school days. Wouldn't change a thing either, except I wouldn't start smoking cigarettes.
Last hint, this is the one Spade film with him as the lead that virtually everyone loves.
I think the deeper generational thing is in the idea that anything "just works". Like, I'm a programmer, right, so I know shortcuts. Ctrl+S saves the file, simple, right?
Me when I want to save a file: Ctrl+SSSS. Why? Because I don't trust it "just works". Same reason I don't trust auto save. Same reason I am stunned every time I tell windows to diagnose and fix the network problem and then it actually does.
I grew up in a time where you couldn't trust any of that shit.
Ego = the self from its own perspective. Makes complete sense, actually. But do they call third-person games "superego games"?
@itty53@vlemmy.net