A few experiences and thoughts on (chat)GPT and AI

Like many people, I’ve been trying out ChatGPT and similar tools recently, and I decided to share some of my experiences and thoughts.

As a tool for coding…

I’m the only Frontend Developer at my current job, so I don’t have colleagues to talk to about the problems I run into. You can imagine, then, that the idea of a relatively knowledgeable AI companion sounds like a good way to bounce ideas around and fix problems! On a few occasions when I was stuck on a complex problem in my daily work, I found talking to ChatGPT quite useful: I told it what I was struggling with, and it actually suggested a slightly different approach that turned out to be a good one. It’s a good way to check whether your assumptions and decisions so far have made sense, conceptually at least. On another occasion, I asked it a question about a browser feature and it answered correctly.

When it comes to generating code examples, however, the output wasn’t always that useful. In more than a few cases, specific requirements I had set for the code were forgotten as we worked through the details, and when I asked it to correct the mistake, it dropped other requirements I had set.

In other cases, ChatGPT came up with code that called browser APIs that didn’t exist, and when I asked it for source material, it generated titles and links to plausible-looking articles, none of which actually existed (all of the URLs returned 404s).

As a tool for writing…

Lately, I’ve been playing around with AI and trying to use it to improve my writing. Specifically, I’ve been running some basic daily journal entries through AI and asking it to rewrite them in the style of famous writers that I named. It’s been fascinating: while I don’t feel particularly expressive in these journal entries, GPT is able to produce a more vibrant piece of text in the style of, say, Virginia Woolf, which is more of a pleasure to read and better expresses how my day was.

Depending on what author you ask the AI to work with, it seems to pick different themes out of what you write, spending more time on certain themes than others. Some will expand on social interactions whereas others are more matter-of-fact. Some will gloss over intimacy and others will highlight it, for example.

I can see how this could improve people’s writing: even if you’re not that emotionally expressive yourself, you can see how the thing you’re trying to say can be expressed richly. Comparing your own input with what the AI produces can be illuminating.

Outside of that, having basically a “thesaurus you can talk to” is very useful. I sometimes find myself looking for a word with a specific meaning or nuance, and ChatGPT definitely delivers there in a way that a typical thesaurus can’t.

Finally, I use (chat)GPT regularly to process transcriptions of voice notes (verbal brain dumps, rants, or ideas for things to write). The transcriptions generated by Whisper (I use the super useful app MacWhisper) are really accurate, but the output is optimised for things like subtitles and doesn’t read well as prose.

Voice notes (especially brain dumps) aren’t always super smooth: they contain umms and other crutch words and tend to include redundant statements. So throwing it all into GPT and telling it to glue sentences together, insert punctuation and paragraph breaks, remove redundant statements, add headers, and make other optimisations is a huge time saver.
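If you want to script this rather than paste into the chat interface, the cleanup step can be sketched roughly as below. This is a minimal sketch, not my exact setup: the instruction wording is illustrative, and the commented-out API call and model name are assumptions based on the OpenAI Python client. Putting every requirement in a single system message also sidesteps the forgetting problem I describe later, since nothing has to survive a long conversation.

```python
# Sketch: bundle all cleanup requirements into one system message, so the
# model receives every instruction together with the transcript.

CLEANUP_INSTRUCTIONS = (
    "Rewrite the following voice-note transcript as readable prose: "
    "glue fragmented sentences together, insert punctuation and paragraph "
    "breaks, remove filler words (umm, uh) and redundant statements, and "
    "add short headers where the topic changes. Do not add new content."
)

def build_cleanup_messages(transcript: str) -> list[dict]:
    """Package the instructions and transcript for a chat-completion call."""
    return [
        {"role": "system", "content": CLEANUP_INSTRUCTIONS},
        {"role": "user", "content": transcript},
    ]

# Hypothetical usage with the OpenAI Python client (model name is a placeholder):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=build_cleanup_messages(raw_transcript),
# )
# cleaned = response.choices[0].message.content
```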


As with the coding examples I gave, ChatGPT easily forgets specifications or requests you made earlier in the same conversation.

For instance, as an experiment, I recently asked it to help me write a summary of my activities at my current employer for my website. As I wrote a rough outline of what I did, ChatGPT asked me questions to help flesh out the different segments. This seemed really cool, but when it came time to output something roughly resembling an article, it left things out.

That is a problem, and unfortunately not one that it points out willingly… I had to notice it myself. While AI can smooth out roughly written or rant-like texts and summarize things when you ask for them, it should follow instructions explicitly.

In the end, AI products like this are, in their current state, too much of a black box. Perhaps, like many people, I’m expecting too much from what is basically a highly advanced autocorrect, but the product should be accountable for its decisions and able to communicate “editorial decisions” like these.

I understand that with evolving instructions during a conversation, it can be tricky for the AI to follow them accurately. Therefore, the writing interface should make (active) instructions explicit, including the information shared with the AI, such as requests for tone and grammar.

In short, the AI should be more explicit about its decisions, especially when it cannot follow specific instructions.

Potential: A tool within companies

One thing I haven’t heard many people talk about when it comes to AI is its ability to form a mental model of products, services and companies.

In larger companies, there might be many internal knowledge-base articles that reference the same feature or certain processes, and considering that documentation is always one of the last things to be updated, people like customer support agents might be working with outdated information without realizing it.

Having AI analyze which pieces of documentation might be affected by a policy change can help companies ensure they are being thorough, especially with protocols that often get forgotten when new product features or directions are decided. This is especially important when edge cases are affected. Tools that detect these issues while protocols and policies are being designed seem invaluable to me, as they prevent confusion afterward.

A good AI model of an organisation and its policies could also help customer support agents handle complex questions or edge cases with more confidence, without having to escalate.

AI chatbots nowadays are rather basic, but with a thorough understanding of a company or product they could be a lot more effective. Such a model could also assist with generating effective and accurate help documentation based on this mental model, lightening the load on support agents.