This year has seen more than enough hype about AI of all kinds. More than enough.
It is obvious that unassisted AI is laughably incompetent, and will "replace" only the most mindless human activities. (Which is why it is so scary to the media and politicians.)
But one area that is really interesting is just how cleverly AI can be used as a tool for humans.
This fall, Didem Gürdür Broo reports on personal experiments using generative AI to design a robot and other projects [1]. She considers this a "co-design" process.
Broo notes that this approach generally works best with well-structured, well-studied problems. The AI can quickly find new and different solutions for the humans to try out.
"In these kinds of engineering environments, co-designing with generative AI, high-quality, structured data, and well-studied parameters can clearly lead to more creative and more effective new designs. I decided to give it a try."
Her first experiment used AI image generators to create pictures of an imagined jellyfish robot, by issuing prompts to the AI system and seeing what images came back. Getting something useful out of this process required learning how to write good prompts: specific enough to get a relevant result, but open enough to allow for surprises from the AI.
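As a rough illustration of that balancing act, here is a minimal Python sketch of how one might assemble such prompts. The constraint and hint lists, and the build_prompt helper, are hypothetical examples for illustration only, not anything from Broo's article:

```python
import random

# Hypothetical sketch: build image-generation prompts that pin down the
# constraints we care about while leaving the rest open to the model.
FIXED_CONSTRAINTS = [
    "a soft underwater robot inspired by a jellyfish",
    "bell-shaped body with trailing tentacle actuators",
]

OPEN_ENDED_HINTS = [
    "unexpected materials",
    "an unusual color palette",
    "a surprising propulsion mechanism",
]

def build_prompt(n_open_hints: int = 1) -> str:
    """Combine specific constraints with a few deliberately vague hints."""
    hints = random.sample(OPEN_ENDED_HINTS, k=n_open_hints)
    return ", ".join(FIXED_CONSTRAINTS + hints)

if __name__ == "__main__":
    # Each call yields a slightly different prompt to feed to an image generator.
    for _ in range(3):
        print(build_prompt())
```

The fixed constraints keep the result relevant, while the loosely worded hints leave room for the kind of surprises Broo describes.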
There were plenty of surprises. For example, a request for "technical design" resulted in an image with nonsense text. Presumably, this reflects what "technical drawings" look like to the model: full of labels it can imitate but not actually write.
She also found that the AI systems rendered jellyfish-like designs quite nicely, but couldn't draw an octopus-like design. Who knows why?
Ultimately, the AI-generated images inspired thought, especially about the aesthetics of the robot. But none of the designs were directly used.
Another design problem she took on was "to try to illustrate the complexity of communication in a smart city." She found that, at best, the images "represent many of the individual elements effectively, but they are unsuccessful in showing information flow and interaction." I.e., the AI doesn't understand the idea being illustrated, though it may be able to draw some of the elements.
Still, the failures might be useful. "Even the images that didn't accurately convey information flows still served a useful purpose in driving productive brainstorming."
Overall, the process was "unpredictable": "human users have little control over AI-generated iterations". It's hard to steer the result, and there is no guarantee that additional iterations will improve the solution. Broo thinks that newer versions of these models are getting more predictable, though one really wonders.
In the end, Broo thinks that this kind of co-creation can help designers. Current versions of image generators are "a way of creating a little bit of chaos before the rigors of engineering design impose order".
That's one way to put it.
This is an interesting conclusion, because that is not what these tools are designed to do, as far as I know. (And, by the way, making them more "predictable" makes them less "chaotic", I would think.)
So…if chaos is what you want, would it be possible to deliberately design a generative AI tool to be a chaos generator? Would this work even better?
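Purely as a thought experiment, a "chaos generator" might look like an ordinary prompt pipeline with an explicit chaos knob that splices in descriptors from unrelated domains. The chaotic_prompt helper and its wildcard list below are entirely hypothetical, just a sketch of the idea:

```python
import random

# Hypothetical "chaos knob": mutate a base prompt by splicing in descriptors
# drawn from unrelated domains, so each iteration drifts further from the brief.
WILDCARDS = [
    "in the style of a Victorian patent drawing",
    "built from woven bamboo",
    "with bioluminescent seams",
    "as if designed by termites",
]

def chaotic_prompt(base: str, chaos: float, rng: random.Random) -> str:
    """Append more wildcard descriptors as the chaos level (0..1) rises."""
    n = round(chaos * len(WILDCARDS))
    extras = rng.sample(WILDCARDS, k=n)
    return ", ".join([base] + extras)

if __name__ == "__main__":
    rng = random.Random(42)
    base = "a jellyfish-inspired underwater robot"
    for chaos in (0.0, 0.5, 1.0):
        print(f"chaos={chaos:.1f}: {chaotic_prompt(base, chaos, rng)}")
```

At chaos 0 you get the plain brief back; turning the knob up pushes each iteration further away from it, which is roughly the "little bit of chaos" Broo found useful, only produced on purpose.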
1. Didem Gürdür Broo, "How Generative AI Helped Me Imagine a Better Robot," IEEE Spectrum - Artificial Intelligence, October 14, 2023. https://spectrum.ieee.org/ai-design