A common metaphor about the potential dangers of AI is called “paperclip
maximisation”. The idea
is that an AI tasked with producing paperclips could figure out that it can
produce more paperclips by, for example, taking over the entire galaxy and
turning as much matter as possible into paperclips.
Sometimes I wonder if software engineers are prone to a similar issue. We
believe we’re tasked with solving as many problems as possible, and realise that
if we increase the number of problems, then we can increase the number of
solutions.
This could be described as “problem maximisation”. It might be subconscious, but
there often does seem to be a bias towards elaborate solutions that produce
their own sub-problems that then need solving.
The Systems Bible touches on this with “new systems mean new problems”,
though it leaves the idea that this might be intentional a little more
implicit.