
I read “How prompt engineering isn’t the future” and it made me reconsider how we interact with AI. The author argues that manual prompting isn’t as crucial as we currently think. I agree: our interaction with AI today feels like the MS-DOS era, awkward and manual. More natural ways of giving instructions should make working with AI less clunky.
The article proposes that ‘problem formulation’ is the critical skill needed to unlock AI’s potential. This resonates with me.
Communication theory often notes that a large share of what we convey, by some estimates as much as 70%, is never made explicit in our words. Herbert Paul Grice studied a related phenomenon in language itself: ‘conversational implicature,’ the meaning a speaker conveys without stating it outright. He also proposed the ‘cooperative principle,’ the idea that conversation is an implicit cooperation between participants, guided by his maxims of quantity, quality, relation, and manner (give enough information, be truthful, be relevant, and be clear).
Looking at these principles, I wonder how we can improve the way we prompt AI; there is clearly room to evolve. How do we make the problem we’re solving more transparent and provide better context so the AI produces better output? Can we fold Grice’s maxims into our prompts to make the exchange more cooperative?
Few-shot prompting might be part of the answer (a rough sketch below), but I believe there’s more to explore.
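As a rough illustration of what I mean (the task, labels, and examples here are invented), a few-shot prompt can be read as a small act of Gricean cooperation: a handful of truthful, relevant examples and an unambiguous task statement that give the model just enough context.

```python
# A minimal sketch of a few-shot prompt, loosely guided by Grice's maxims:
# enough context but not too much (quantity), truthful examples (quality),
# only relevant examples (relation), and a clear task statement (manner).
# The classification task and examples below are made up for illustration.

FEW_SHOT_EXAMPLES = [
    ("The meeting is moved to 3pm Friday.", "schedule_change"),
    ("Can you send me last month's invoice?", "billing_request"),
    ("The app crashes when I tap 'export'.", "bug_report"),
]

def build_prompt(message: str) -> str:
    """Assemble a few-shot classification prompt for a new message."""
    lines = [
        "Classify each customer message into one label:",
        "schedule_change, billing_request, or bug_report.",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Message: {message}")
    lines.append("Label:")  # the model is expected to complete this line
    return "\n".join(lines)

if __name__ == "__main__":
    # The resulting string would be sent to whichever model or API you use;
    # the actual model call is deliberately left out of this sketch.
    print(build_prompt("I was charged twice for my subscription."))
```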
Any thoughts on improving the way we communicate with AI?