Perhaps AI Will Prompt People To Be More Dynamic
We should stop worrying about being replaced, and start making ourselves less easy to replace
Way back when most of my friends and I worked in actual offices (albeit mostly quirky, tiny start-up ones), we creative types often reveled in the complaints of our colleagues in more staid positions, who resented what we could get away with.
Common among these complaints was that we never came in on time (and no one important seemed to care), and we always left whenever we felt a change of scenery was needed (and again, the higher-ups scarcely batted an eye).
Many of my creative peers and I were “allowed” to work from home long before it became a thing - a liberty we often took without even bothering to ask. This held true even with bosses who typically didn't grant other coworkers the same flexibility, and who often voiced their dislike of remote work in general. Yet for us, they didn’t just permit it - they never even made a big deal about it.
We enjoyed this freedom not (just) because we were defiant jerks, fully aware that we could indeed get away with such things.
It was because our work tended to be unique and, thus, harder to replace. So, we were “allowed” to do it our way, as finding people who could do it at all wasn't all that easy.
Now that things like DALL-E exist, you might think that more agreeable, conventional people can finally be free from our anomalous tyranny. But that's because you think our indulgence-worthy value came from what we could make, when really, it was always in how we think.
A computer can replicate what we've done, but not (necessarily) what we might do next. The very thing many lament about us - our unpredictability - is, and always has been, our greatest strength.
This was (I think) well illustrated recently by the controversy surrounding a post that used artificial intelligence to “complete” Keith Haring's renowned 1989 “Unfinished Painting”. The original painting was “left” as it was to symbolize the life the artist would not live because of his imminent death from AIDS. Some decried “finishing” it as disrespectful, others lauded it as a clever critique of AI art, and many admired its effectiveness as bait.
My take is that the post was cleverly crafted, and made a good argument.
I mean this not only in the sense that, while anyone can make complex tessellations (now), not everyone would think to deliberately leave such a repetition unfinished for symbolic reasons (especially if it hadn't been done before) - but also in the sense that devising such a way to prove that point isn't something just anyone (or any thing) can do, in and of itself.
It's also worth considering - would an AI suggest such an idea (if it hadn't already been done)?
I’m not saying that an AI could never come up with something clever. Rather, I'm suggesting its cleverness will likely be distinct from a human's, and not just because it might have its own interests.
If you have ever collaborated with someone in another country, you know that cultural differences and varying communication styles can often cause misunderstandings and misaligned expectations.
This will, obviously, also apply to some “one”, or some “thing”, with an entirely different way of being.
Much like outsourcing doesn't always work well due to cultural differences, no AI will ever quite understand us the way a fellow human might. However sophisticated and useful it becomes, it will never share quite the same experience of being, and its understanding will always be fundamentally different from ours.
It's likely to keep improving at producing what's already been done, just how we like it.
It will probably also generate some incredible things we'd never think of ourselves, or manage to achieve without it.
However, it might never be quite as adept as some humans can be at envisioning completely new things that would appeal to people, particularly ideas the average person wouldn’t normally think to ask for.
I can't help but think of that oft-repeated quote attributed to Henry Ford, that if he “had asked people what they wanted, they would have said faster horses”.
Similarly, a post I saw recently made a comparable point, that “To replace programmers with AI, clients will need to accurately describe what they want. We’re safe”.
While this might have sparked a smug little chuckle, I do recognize that such a perspective may be a bit overly self-congratulatory.
Of course, a whole lot of people will likely be more than satisfied with the “good enough” they know to ask for, and I don’t want to overlook the fact that this shift will significantly impact many lives.
But I also want to make the case to let this be something that inspires you, rather than disheartens you; to think of what you can do that will be harder for anyone or anything else to do quite the same.
Much of humanity is never truly content with just “good enough”. Inventing and using tools has been crucial to our progress throughout history, ever since we started smashing rocks against rocks to chip them into tools. It's why we've survived, and it will always be our path to thrive.
In an era where too many seem stuck in “you're either with us or against us” thinking, AI might just emerge as an unexpected ally - because it's the “against us” thinkers who are most likely to carve out a meaningful future, in this context.
Breaking away from your team might not carry as many consequences in a world where AI can replace the output of an entire team. Those adept at effectively questioning - both AI and people - are the ones poised to really thrive, given these new tools.
This isn't limited to creative fields. Despite AI's rapid expansion into areas like visual rendering and writing, with rhetoric pushing some into more technical or manual roles in response, I suspect the most profound changes will be in these more “practical” fields.
As an example, genuinely creative minds in programming, along with some insightful, if not leading, thinkers under them, could remain vital. However, the mid-level hordes of purely rote tech workers may easily be replaced.
The same principle applies in medicine. Exceptional doctors who blend critical thinking with personalized care will remain invaluable. In contrast, those who rigidly adhere to guidelines and protocols without adaptability could find themselves outdone by AI, perhaps sooner than expected.
As technology progresses, even hands-on and skill-intensive tasks in trades are increasingly facing automation, albeit often in ways that complement, rather than fully replace, human work.
No matter your field, if your role can be easily replicated by a machine, it won't be around for long. Your real value will lie in being the unique brain behind our new brawn.
You might find this perspective discouraging, and indeed, there are valid questions about how people will gain the experience and confidence needed to challenge a tool they will only become more reliant on over time.
This shift could also feel isolating, potentially diminishing the value of being a good team player.
However, it might be more constructive to see this as an encouragement for your unique ideas and quirks to shine. In a world where AI simplifies replicating the ordinary, the importance of being distinctive will only grow.
In my experimentation with various AI tools, especially the visual rendering ones, I've found it can be all too easy to have your intentions sidetracked by the AI’s limitations if you’re not very specific.
Often, you can spend more time figuring out how it “thinks” than you would hiring a skilled person and directing them instead. People, at least, typically know to work within the limits of reality, and don't surprise me out of nowhere with something like a dog with udders.
We need to ensure that AI is expanding our capabilities, not just limiting us to new, peculiar constraints.
Being someone who rarely settles for “good enough”, and who generally wants to see my exact vision realized, I'm always pushing AI to refine and revise when I use it - usually toward something close to what I would have done myself, or could've done just as easily (to the point where I wonder if I should've just done it myself).
Sometimes, I even wonder if I've managed to irritate it, much like I have many a human before.
If I have, I think that's an accomplishment; in this way, we are both learning and pushing each other forward.
Much like with humans, pushing back and forth with a mind that thinks differently than I do can often make the outcome better than just sticking to my original idea - which makes the challenge worth the effort.
I generally like to push myself, and I think seeing new tools as an opportunity to do this in new ways keeps people moving forward in a much better direction than getting stuck in their ways ever could.
We only need to look around our aging society to see the consequences of too many people settling for “good enough”, and resisting change. Too often, we find ourselves mired in arguments over dwindling resources, rather than creating and moving forward.
AI could help break this stuck dynamic by giving even more value to those who question and push for progress - encouraging us to make and innovate anew, rather than just fighting over how to divide what we already have, and who gets to use it.
This shift might help break the polarization caused by our current stagnation, as consulting a mind not constrained by physical decline could lead us beyond these limitations.
After all, what's the point of your being if not to be different - and to make a difference, in so doing?