You should NOT drink LLM output straight
That's both limiting and concerning

Rich Heimann made some very insightful remarks on LinkedIn regarding the use of LLMs, and I want to record my responses here.
It's true that LLMs can be helpful; they're just perhaps not helpful in the precise ways they're advertised to be, or in the ways many people understand them to be. Heimann is right that they can still be helpful; otherwise they just wouldn't be used at all. However, again: they'd have to be used in ways that aren't entirely "routine" to be beneficial:
Programming: You can't exactly just have it "LLM code" and copy-paste the output in. Doing so could introduce dangerous code that's easily hacked and/or introduce technical debt into a body of software that'd have to be fixed later, possibly wasting far more man-hours than were "saved"; LLMs don't "think about" or "know" what the requirements of a software project are. (A sketch of this kind of risk follows below.)
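To make the "easily hacked" point concrete, here's a minimal, hypothetical sketch in Python. Everything in it is my own invention for illustration (the table, the function names, the scenario), not something from Heimann or any particular LLM: the kind of string-concatenated SQL lookup a model might plausibly hand you, next to the parameterized version a careful reviewer would insist on.

```python
import sqlite3

# Hypothetical example: the kind of lookup an LLM might emit verbatim.
# Concatenating user input into SQL is vulnerable to injection, e.g.
# username = "alice' OR '1'='1" returns every row instead of one.
def find_user_unsafe(conn: sqlite3.Connection, username: str):
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

# The fix a reviewer should insist on: a parameterized query, so the
# driver treats the input strictly as data, never as SQL.
def find_user_safe(conn: sqlite3.Connection, username: str):
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    malicious = "alice' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # leaks both rows
    print(find_user_safe(conn, malicious))    # returns nothing, as intended
```

The unsafe version happily returns every row for the injected input; the parameterized one returns nothing, which is what the (hypothetical) requirement actually wanted. The point isn't that LLMs always write injectable queries; it's that nothing in the model is checking this for you.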
Informational search/research: I've personally found that I couldn't trust those darned things further than I could chuck a bag of bricks. Even when those engines produce web links that supposedly serve as "references" for the output to enable double-checking, those "reference links" often point to info that's only tangentially related to the original query rather than directly answering it. Worse, the sources reveal the LLM's information to be incorrect only when examined CLOSELY, i.e. the fine points made by the "sources" are NOT what the output indicates. In other words, you can only sort of use the engine to point you toward possible sources of information (IF it provides weblinks… some don't even do that and are thus basically USELESS for research), but you can't directly take the LLM's output as the answer. That's not really using the engine directly at all, and it's hardly a routine look-up.
We can clearly see what's wrong with that. It's limiting because we can't simply take the "answers" spat out by LLMs as the answers. Well, you could, but you'd be placing yourself at risk. It's concerning because I wonder just how many people out there realize that you're not supposed to take LLM output as-is, because more often than not it's NOT exactly the answer. The trouble is that LLM firms portray their products as providing actual answers, even with those little obligatory "these things may contain errors" disclaimers, which don't properly inform people of the non-obvious implications of those same-said "errors"… until they bite those same hapless users in the ass, and perhaps not even then.
I've said previously that LLMs have to be "used in a non-routine way" in order to be helpful instead of harmful. Let me clarify what a user should do.
Lesson number one: Don't just use output from LLMs like GPT as-is. At the very least, carefully scrutinize the output and check / evaluate its truth. The ways the output is wrong are often not even obvious: it doesn't take into consideration the bigger picture surrounding the query (e.g. the software project's requirements, including security, or, in the case of research topics, the fine points of an argument or demonstration). Well, of course it doesn't; regular internet articles often aren't tailored to your particular issues either, but at least they're written by human beings who hopefully ARE thinking about the big picture, or at least about their personal experiences, which machines completely lack. (See the sketch after this paragraph for what "scrutinize and check" can look like in practice.)
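Here's one minimal, hypothetical way to operationalize lesson number one for code: write tests from YOUR requirements before accepting the suggestion. Everything below is invented for illustration (the password policy, the function name, the LLM-style snippet); the point is only the workflow of checking the suggestion against the requirement instead of trusting it.

```python
import unittest

# Hypothetical snippet as an LLM might suggest it: looks plausible,
# but it only enforces an 8-character minimum and never checks for a digit.
def is_valid_password(password: str) -> bool:
    return len(password) >= 8

# The project's actual (hypothetical) requirement: at least 12 characters
# AND at least one digit. Tests written from the requirement, not from the
# suggested code, expose the gap before the snippet gets merged. Two of the
# three tests below fail against the snippet above, which is the point.
class TestPasswordPolicy(unittest.TestCase):
    def test_minimum_length_is_twelve(self):
        self.assertFalse(is_valid_password("abcdefgh1"))     # 9 chars: too short

    def test_requires_a_digit(self):
        self.assertFalse(is_valid_password("abcdefghijkl"))  # long enough, no digit

    def test_accepts_a_compliant_password(self):
        self.assertTrue(is_valid_password("abcdefghijk1"))   # 12 chars with a digit

if __name__ == "__main__":
    unittest.main()
```

Run it and two of the three tests fail, which is exactly the information you want before that code gets anywhere near the codebase. The suggested function "works" in the sense that it runs; it just doesn't satisfy the requirement it never knew about.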
Lesson number two: Don't ever forget lesson number one, because to my knowledge there has NEVER before been an "end-product" like LLMs, which aren't even supposed to be used as end-products! They're intermediaries, and not even all that great as intermediaries unless you learn to use them with the requisite care. As I've said before, people who aren't careful while using these things put themselves at risk... Maybe being wrong about little things is "okay," but you never know just when it'll bite you in a big way that doesn't just result in mere chagrin or minor inconvenience ⚠️

