title: ChatGPT is ...
tags: AI
published_date: 2023-02-13
make_discoverable: true
meta_description:
is_page: false
ChatGPT is ...
I didn't really want to talk about this piece of garbage, but here I am. I have very little reason to use an LLM; I simply have no use for it. In my strong opinion, it's certainly a great tool for cancer research or other very specific tasks, where it can be trained pretty well to accomplish a very specific result - finding early signs of lung cancer, for example. But ... I wanted to experiment and see what the hype is about. I chose a field where I thought it had surely scraped (stolen) enough data to give me a semi-decent result. Having a crude understanding of how LLMs work (essentially guessing the most likely next word, given the words before it), I thought a lot of (stolen) data should give good results.
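Just to illustrate what I mean by "guessing the most likely next word": here's a toy sketch in Python. To be clear, this is not how ChatGPT actually works under the hood (real LLMs use neural networks over tokens, not word counts); it's only a crude illustration of the "predict the next word from what came before" idea.

```python
# Toy "guess the most likely next word" sketch: a tiny bigram model.
# Not how ChatGPT actually works internally - just an illustration of
# predicting the next word from whatever came before it.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which word in the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# Starting from "the", keep picking the most likely next word.
word = "the"
sentence = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

print(" ".join(sentence))  # "the cat sat on the cat" - plausible-looking, not necessarily true
```

The point of the toy: the output looks vaguely like language because it's statistically likely, not because anything understood the question.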
I'm not good at programming and coding. I somehow have a hard time learning a programming language well enough to write it without looking up almost everything all the time.
But - due to a very long time working in technical support for various bits of technology, and being a mechanical engineer, I'm very good at understanding "what is wrong" and "how it could be fixed". It's a very weird skill to have, but almost all the time I can look at something, figure out what's wrong with it quite quickly, and come up with some idea to fix it. This can be code, my lawnmower engine or many other things. It's quite handy, actually...
However, combine this with ChatGPT, and you get code that works quite well - or so I thought. I can tell it what I need, and it'll spit out something glued together out of bricks, spit, poop and some code; most of the time it doesn't really work, but kind of... but then I can look at it and see what's wrong. Can I write code now? No. Can LLMs write code? No. (I tried a few.) But it's a bit easier for me to make code do what I want - but not really. I tried this a couple of times for different small projects, and it turns out it doesn't save me any time at all. The code is garbage, doesn't work most of the time, and I have to put a lot of effort and time into fixing it.
In short - it's garbage. A very simple task, unaccomplished. It can't even answer simple questions I can easily find answers for with almost any search engine. When I challenge the LLM - pointing out that what it hallucinated is wrong - it claims to be "sorry" and gives me yet another wrong answer. Why is everyone excited about this? No clue.