The problem is GPT is learning from our existing knowledge base. If legislation is trying to amend a broken system, we don't want AI to be modeling that system. This case seems fairly harmless; an AI takeover isn't what we should be worried about.
Something like institutional racism being replicated in a more insidious manner is the concern. Relying on these closed systems potentially gives the type of people who implemented the discrimination being modeled the chance to turn around and say, "See, we were right all along!" If results are held up on a pedestal and AI is integrated into our political and legal systems, it may make changing society for the better much harder.
We shouldn't universally condemn tools like ChatGPT being used in this way, but we should tread very carefully when it comes to large-scale societal changes.
That's... not relevant to my point at all.
Make it Apple employees in store and Microsoft forums. If humans give bad advice 10% of the time and AI (or any technological replacement) makes mistakes 1% of the time, you can't point to that 1% as a gotcha.
Should Reddit or Quora be liable if Google used a link instead? AI doesn't need to work 100% of the time. It just needs to be better than what we are using.
The ultimate question of philosophy...
"Should I kill myself, or have a cup of coffee?"
-Camus
Do you mean Jewish Arabs? Or Semitic Muslims, maybe. Technically, Semitic is a language group which includes Arabic, but like Indo-European it can also refer to the people who originally used the dialect... bigotry fails at logic.