In a previous post, I tried to use GPT-3 to mimic the functionality of an old Reddit "bot" that explained jokes. The joke was that it obviously wasn't a bot because it was too on the nose. Here's an example:
Joke: What did Mike Tyson say to Vincent van Gogh?? You gonna eat that?
Explanation: Vincent van Gogh removed his own ear. Mike Tyson was a pugilist who bit off the ear of Evander Holyfield during a pugilism exhibition.
I was impressed by how far GPT-3 had come. It got some simple jokes but failed on other, more complicated ones. For GPT-3, I gave a prompt of several jokes along with the explanations given by the original "bot".
For ChatGPT I can just prompt it with “explain this joke in one to three sentences. Bold the most relevant words in the explanation.”
Here's how ChatGPT compares to GPT-3.
Joke: If you could have dinner with any historical figure, who would you choose? "Gandhi." "Why him?" "More food for me."
Original: Gandhi was stereotypically perceived to engage in hunger strikes
GPT-3: Gandhi was a famous Indian leader who was known for his hunger strikes.
ChatGPT: The joke is suggesting that the person who would choose to have dinner with Gandhi would do so because Gandhi ate very little food, so there would be more food for the person to eat. The most relevant words in this joke are "Gandhi" and "more food for me".
It's a pretty good explanation, definitely more verbose. Apparently I'm having trouble communicating that I want the output text to be bold; the formatting might be separate from the model. I asked it to say the word "hello" in bold font and it returned a response about bolding a word in HTML. Oh well.
Here’s another one.
Joke: I've got one thing to say to all you motherfuckers. Happy Father's day
Original: Fathers are stereotypically perceived to have intercourse with mothers
GPT-3: Motherfucker is a word that is used to insult someone.
ChatGPT: The joke is a play on words, mixing the concept of Mother's Day with Father's Day. The punchline is that the speaker is using profanity (referring to the listeners as "motherfuckers") to wish them a happy Father's Day. This is a humorous contrast, because the profanity is unexpected and inappropriate for a holiday that is typically associated with love and respect for one's father.
This one it missed, so I tried it a few more times:
ChatGPT: This joke is playing on the fact that Father's Day is a holiday that is traditionally celebrated by expressing appreciation and love for one's father. The joke is making a sarcastic comment by saying that the speaker has only one thing to say to everyone, but that thing is a profanity-laden insult. The joke is likely intended to be humorous because it is unexpected and goes against the norm of expressing love and appreciation on Father's Day.
ChatGPT: The joke is a play on words, using the phrase "happy Father's day" to express the opposite sentiment. The joke implies that the speaker is angry and has something negative to say to the people they are addressing, and uses the occasion of Father's Day to sarcastically wish them well. The most relevant words in this joke are "motherfuckers" and "happy Father's day".
Both GPT-3 and ChatGPT fail to connect the relationship between mothers and fathers with the literal and figurative meanings of "motherfucker".
Here’s a longer one:
Joke: Blonde gets caught speeding. The cop is also a blonde.
Cop: “Let me see your driver’s license.”
Driver: “What’s that?”
Cop: “A square thing with your picture on it.”
Blonde fumbles through her purse, finds a mirror, sees herself in it, and hands it to the cop. Cop looks at it, hands it back and says, "I'm gonna let you go without a ticket. I didn't know you were a cop."
Original: Blondes are stereotypically perceived to be unintelligent.
GPT-3: A mirror is a square thing with your picture on it.
ChatGPT: The joke is a play on the stereotype that blondes are not very intelligent. In the joke, the driver who gets pulled over for speeding is a blonde and does not know what a driver's license is, thinking it is a square thing with her picture on it. The cop who pulls her over is also a blonde, and upon seeing the driver's mistake, lets her go without a ticket. The most relevant words in this joke are "blonde" and "driver's license".
This one it got, but it's interesting that the explanation always uses its maximum allotted three sentences.
Joke: The only two white actors in Black Panther are Martin Freeman, who played Bilbo Baggins, and Andy Serkis who played Gollum. They're the Tolkien white guys.
Original: Tolkien sounds like token
GPT-3: Martin Freeman played Bilbo Baggins in the Hobbit movies. Andy Serkis played Gollum in the Lord of the Rings movies.
ChatGPT: The joke is a play on the fact that the Marvel movie "Black Panther" has a predominantly black cast, with only two white actors in the main cast. The two white actors in question, Martin Freeman and Andy Serkis, are known for playing the roles of Bilbo Baggins and Gollum in the "Lord of the Rings" and "Hobbit" movies, which were written by J.R.R. Tolkien. The joke implies that the two white actors were chosen for the movie because they had experience playing fantasy characters, and refers to them as the "Tolkien white guys".
It still doesn’t get the joke, but with a little prodding it finally gets it:
>consider wordplay on the word "tolkein" and try again
The joke is a play on words using the name "Tolkien". In the joke, the actors Martin Freeman and Andy Serkis are referred to as "Tolkien white guys", implying that they are often cast in roles related to the works of J.R.R. Tolkien. The wordplay in this joke is in the word "Tolkien", which is both the name of the author and a play on the word "token", as in a token white person. The most relevant words in this joke are "Tolkien", "Martin Freeman", and "Andy Serkis".
And finally, here’s my favorite:
Joke: My wife and I went on our honeymoon to Australia. Unfortunately, I had to dial the help line.
"G'day this is Tim, you've reached the Aussie help line. How can we help ya?"
I told him, "We were in the ocean and my wife was stung by a jellyfish on her lady parts. Her vagina is completely swollen shut. It's our honeymoon, and well....ya know."
The guy on the help line replies, "Ah, bummer mate!"
I say, "I hadn't thought of that! Thanks for the advice. You've saved my honeymoon!"Original: Bummer can mean unfortunate or anal intercourse.
GPT-3: G'day is a greeting in Australia.
ChatGPT: The joke is a humorous play on words and the different meanings of the word "advice". In the joke, the husband calls the help line in Australia and asks for advice on how to deal with his wife's swollen vagina, which was caused by a jellyfish sting. The person on the other end of the line replies with the phrase "ah, bummer mate!", which the husband takes to mean helpful advice and thanks the person for saving his honeymoon. However, the person on the help line was actually just expressing sympathy and did not actually give any advice.
Other attempts gave similar responses.
Final thoughts
I was a little disappointed because all my other experiences with ChatGPT had blown me away. I thought it would be much better than GPT-3 at explaining jokes, but it seems only marginally better. It responds well to prompting, but it tends to just repeat the joke. It doesn't understand wordplay very well, and the jokes it does get rely on very common tropes (a joke with the word "blonde" is probably playing on the stereotype that blondes are dumb). Although if you prompt it to closely consider wordplay, it does so.
I tried having ChatGPT write some jokes for me as well with specific topics but it was pretty bad.
This made me think of what Nassim Taleb said about ChatGPT. He offers a definition of "verbalism":
Verbalism is the use of terms both central to one’s discourse and devoid of rigidity of meaning; their meaning can change with context or circumstances
David Deutsch is similarly cynical, although impressed. He sees it as a leap forward in chatbots but a step back for AGI.
If we continue down this path, which is likely since it's so impressive and useful, we risk losing sight of actual AGI.
I may write more about this topic in a future post. But overall, I found ChatGPT incredible and useful, definitely a subscription I’d be willing to pay for.
Your mistake is not trying the GPT-4 Playground. The GPT-4 model exposed over the OpenAI API is far more flexible and far more steerable. I have been able to get it to respond in a style similar to the joke "bot" using some system instructions and three examples.
Here were my system instructions:
"""
You are an advanced, knowledgeable artificial intelligence whose purpose is to explain jokes well. Given a joke, you provide an explanation. Your explanation is very succinct, does not waffle, does not always use full sentences, goes straight to the heart of the matter. You speak no more than two or three brief sentences.
You bold all relevant words in the explanation.
"""
And the three examples I gave it were the ones marked as "original" in your post, with some minor changes.
At the risk of sharing my identity, here is the link to my OpenAI playground preset: https://platform.openai.com/playground/p/eRlZ6ZSayNp9WFMGjvDuorfj?model=gpt-4
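For anyone who prefers the API over the Playground UI, here is a minimal sketch of how a preset like this maps to an API call. The system message is the one quoted above; the few-shot pairs are the "Original" explanations from the post (the bolding via `**` is my assumption about the "minor changes" mentioned), and the `openai` library call shown in the comment uses the v0.x `ChatCompletion` interface.

```python
# Sketch: reproducing the Playground preset as a chat-completion request.
# Only build_messages() runs here; the actual API call (which needs an
# OPENAI_API_KEY and the openai package) is shown commented out below.

SYSTEM = (
    "You are an advanced, knowledgeable artificial intelligence whose "
    "purpose is to explain jokes well. Given a joke, you provide an "
    "explanation. Your explanation is very succinct, does not waffle, "
    "does not always use full sentences, goes straight to the heart of "
    "the matter. You speak no more than two or three brief sentences.\n"
    "You bold all relevant words in the explanation."
)

# Few-shot pairs taken from the "Original" explanations in the post.
FEW_SHOT = [
    ('If you could have dinner with any historical figure, who would you '
     'choose? "Gandhi." "Why him?" "More food for me."',
     "**Gandhi** was stereotypically perceived to engage in **hunger strikes**."),
    ("I've got one thing to say to all you motherfuckers. Happy Father's day",
     "**Fathers** are stereotypically perceived to have intercourse with **mothers**."),
    ("The only two white actors in Black Panther are Martin Freeman, who "
     "played Bilbo Baggins, and Andy Serkis who played Gollum. They're the "
     "Tolkien white guys.",
     "**Tolkien** sounds like **token**."),
]

def build_messages(joke: str) -> list[dict]:
    """Assemble the system prompt, few-shot examples, and the new joke."""
    messages = [{"role": "system", "content": SYSTEM}]
    for joke_example, explanation in FEW_SHOT:
        messages.append({"role": "user", "content": joke_example})
        messages.append({"role": "assistant", "content": explanation})
    messages.append({"role": "user", "content": joke})
    return messages

# import openai  # requires OPENAI_API_KEY in the environment
# response = openai.ChatCompletion.create(
#     model="gpt-4",
#     messages=build_messages("Why don't scientists trust atoms? They make up everything."),
# )
# print(response["choices"][0]["message"]["content"])
```

The point of the message ordering is that each user/assistant pair acts as a worked example, so the model imitates the terse "Original" style rather than ChatGPT's three-sentence default.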