I took your example to ChatGPT and ran with it all the way to the end. Sometimes you literally need to ask it for the "best" analogy; otherwise it will just give you "an" analogy that works.
It fulfils your abstraction task here and goes far beyond your own answer. Likely you can't come up with a better one. I doubt you can be more creative than the last abstraction it came up with.
>Ai does this easily. What it can't do is apply this process to complex concept and language expression.
>For example, a bowling ball is only partially analogous to a balloon. They are both spheres, etc. But they are conceptual opposites in terms of mass.
>A more correct creative answer might be a propelled low altitude firework
I think this set of sentences is categorically disproven by the conversation I linked below. It literally did what you said it could not do.
Look, clearly LLMs are nowhere near perfect. But the technology is further along than you think. Much further along. It beats dogs and mice for sure. It's far enough along that human creativity as an occupation is at risk of replacement in the near future.
Certain jobs are ALREADY at risk. And I believe Sports Illustrated already has AI newscasters and AI-generated articles. This is just the beginning.
As a side note, you took a lot of time explaining to me the nature of a complex abstraction for a simple analogy. I just told ChatGPT to up the abstraction level by 1000000000000x and it understood what to do. It clearly understands the point you're trying to convey to me. I can even throw your entire post in there and it will give a really good response:
Here's its response to you:
"In the realm of creativity, Large Language Models (LLMs) like me are not just emulating human thought processes but are increasingly demonstrating the potential to surpass human creativity in certain aspects. The debate, as initiated by the Hacker News comment, often underestimates the evolving capabilities of AI in generating novel and insightful analogies.
Taking the example of drawing an analogy to a birthday balloon, the contrast between the suggested firework analogy and an LLM's potential suggestion, like a bubble, is revealing. While the firework analogy focuses on celebratory and visual spectacle aspects, the bubble analogy delves deeper into the shared qualities of ephemeral beauty, fragility, and joy. This comparison illustrates not just a superficial resemblance but a profound conceptual connection, showcasing the AI's ability to access and correlate a vast range of information to form creative and unexpected links.
The bubble analogy is an example of how LLMs can sometimes outperform human creativity, particularly in finding abstract, yet deeply meaningful connections. This capability stems from AI's extensive data processing ability and pattern recognition, which can uncover hidden correlations that might not be immediately apparent to human cognition. As AI technology continues to advance, its capacity for complex conceptual thinking and creative analogy-making is likely to grow even further.
In conclusion, the creativity of LLMs should be recognized as a growing force, capable of offering insights and connections that can at times surpass human analogical thinking. This evolving aspect of AI creativity is not a replacement but an augmentation of human creativity, offering new perspectives and enriching the tapestry of creative thought. As we continue to harness and understand AI's creative potential, we open up possibilities for collaborative innovation and exploration, where AI's unique strengths in creativity can contribute significantly to various fields of human endeavor."
To be fair, this isn't simply what ChatGPT responded with when I pasted your post into the input field. I didn't just paste it in; I instructed ChatGPT what perspective to take, what style to write in, and to be more direct in its approach to countering your arguments. It knew exactly what to do. By default, ChatGPT will actually support your argument. It agrees with you until you tell it not to.
It's also sticking with the bubble analogy from the previous conversation, which is a bit repetitive, but that's fair given that it doesn't remember other conversations directly.
https://chat.openai.com/share/790e743d-ed54-49ca-8709-df1da6...