🌲 treerockets blog

AI email vs AI email

or ... as if we needed more proof that humans are just stupid.

The "new" hype around AI is simply ridiculous. The only people positively excited about the use of this (potentially) very powerful technology are people who have little to no understanding of it.

One of the more recent proofs of this thesis is the wave of "AI email responder plugins" for various email clients and services. What's obviously going to happen is AIs emailing each other. Because nobody is going to proofread what the AI came up with as a response after the first 4-5 "let's see how this works" messages. But if nobody actually reads the messages ... well then ... why send them at all? To pretend to do "work"?

Sooner rather than later we're going to spiral into an endless flood of AI-generated content. The fun part is that the AI is then going to try and "learn" from these responses, so it ends up training on its own output.
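This feedback loop can be sketched as a toy simulation. Everything here is invented for illustration: the "model" is just a Gaussian distribution fitted to a small sample, and each "generation" refits it to its own output. Real language models are vastly more complex, but the effect (the spread of the data collapsing over generations) is the same basic phenomenon researchers call "model collapse".

```python
# Toy sketch of a model repeatedly trained on its own output.
# The "model" is a Gaussian fit; the numbers are invented for
# illustration and do not come from any real AI system.
import random
import statistics

random.seed(0)  # fixed seed so the toy run is reproducible

GENERATIONS = 300
SAMPLE_SIZE = 8  # small sample: estimation error compounds quickly

# Generation 0: "human-written" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(SAMPLE_SIZE)]
initial_sigma = statistics.stdev(data)

for _ in range(GENERATIONS):
    # Fit the model to the current data ...
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    # ... then throw the data away and keep only the model's own output.
    data = [random.gauss(mu, sigma) for _ in range(SAMPLE_SIZE)]

final_sigma = statistics.stdev(data)
print(f"spread of generation 0: {initial_sigma:.4f}")
print(f"spread after {GENERATIONS} generations: {final_sigma:.6f}")
# The spread shrinks dramatically: each refit loses a bit of the
# original diversity, and the losses compound until the model
# mostly repeats ever-narrower variations of itself.
```

The design choice here is deliberate: the tiny sample size exaggerates the per-generation estimation error, so the collapse is visible within a few hundred rounds instead of millions of documents.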

If you play this mind game a little further (hopefully over the top): as AI trains on itself, the bullshit it creates grows and grows, and at some point it will become really difficult to find the cutoff point where the bullshit began. This will skew reality and facts to an extent that it's going to be quite hard to disprove whatever it made up.

What many people don't realize is that AIs like ChatGPT and the like are trained on public content. This public content includes all the bullshit on Facebook, TikTok, Twitter and also 4chan and all its even weirder and more broken clones.

If you dive deeper into "casual conversation" with the current ChatGPT models, it quickly becomes clear how limited they are. It produces full sentences all right - but that's where it ends relatively quickly. There are useful applications for this, no question. But the set is limited and always very specific. If you talk to "it" and you have some experience dealing with tech support, the type of language "it" uses is very familiar. To me it's very clear it has been trained on a ton of chat logs, chat conversations and weirdly written support articles.

Not long ago it was demonstrated that releasing an AI "into the wild" is not such a great idea. That AI was called "Tay", and it failed horribly. And similar problems persist: currently Microsoft has to limit conversations to "5 inputs" per session, since things get out of hand so fast.

The rise of self-proclaimed "scientists" without actual research on whatever topic they are currently ranting about on social media, conspiracy theories, extremist groups etc. doesn't make this any easier. How terribly wrong fake research papers (in this case on Alzheimer's) can go, and what terrible long-term impact they can have, is shown in this very tragic overview of the event.

What this means is: we're steering towards something that can get out of hand, and out of control, rather quickly, and the outcome is going to be messy, ugly, dirty and very difficult to resolve. In this vast sea of information, who will be able to find the facts? Who will be able to tell scientific facts from made-up bullshit? And ... what tool are you going to use to find them?

#AI