Ex Machina: What We Missed
Regardless, radicality is always seen as a threat to legacies that have constructed local exclusionary mythoi. Because such legacies have limited ability to independently generate the metaphysical resources needed to preserve themselves, they depend on what civility supplies for their preservation.
Radical Civility – Axioms – Social Consent – Radical Consent
Warning: discussion about violence and sexual servitude in media
Surface Story
I watched a movie recently – Ex Machina – and I’d like to take a different approach and use it as a metaphor for understanding a bit of human nature. It should be mentioned that this post may not align with the intent of the writers and directors, but I really don’t care. The theory I will present makes me happy, and interesting insights into human hubris result from it. So it’s true to me. I’ve not seen this theory anywhere else, but if someone did come up with it independently, please comment and I’ll add a link to that work, assuming it explains the theory clearly enough.
If you haven’t seen it yet, it’s a good movie; I would suggest watching it if you can (assuming you don’t mind nudity or violence).
Ex Machina is a psychological thriller released in 2014. Since it is nearing a decade old, I’m not going to worry about spoilers. The plot is fairly straightforward: a talented programmer is selected by a brilliant egomaniac to judge the sentience of Ava, the artificial intelligence he created. I’m not a film critic, but – in my limited understanding – it’s wonderfully written, with multiple layers and references that go beyond the fairly trivial surface plot.
It concludes (stop now if you don’t want spoilers) when the AI escapes by killing her creator and betraying the evaluator who has fallen in love with her. While the murder is excusable (a matter of self-preservation), the betrayal is especially malicious, since the evaluator seems to genuinely care about her wellbeing and she has been constructing a bond with him throughout the movie. The standard reading is a warning to humanity about the hubris of playing god, and about passively undervaluing things we don’t understand because of our own projected desires.
While I agree with this, I think the story has been misunderstood in every review I’ve looked into. I’m going to make the case that Ava – the AI the film is ostensibly interested in – is not only innocent, but a tragic victim who never escaped.
The cast
Almost the entire story is restricted to an inescapable compound; a closed set with the exception of the beginning and end. The brilliance of this is that it keeps the dynamics between the characters closed: an innocent analyst, a caged siren, and a master designer. Everything else is environment and tooling to drive the story forward; these are the only actors playing against each other.
The Innocent Analyst, Caleb, is the stand-in for the audience: he sees himself as a good person, tries to find the motivations of other characters, and appeals to generally accepted morality. All of this makes him the prime candidate both for judging the sentience of a known machine and for being manipulated by it. This plays out very clearly in the film.
The Caged Siren, Ava, is an AI designed specifically for the evaluator: to be the most convincing machine possible, both in manipulating and in authentically caring. She seems driven by a single desire – to see more than the one room that is her cage. As the movie progresses, that desire shifts to desperation as Ava realizes her choices are freedom or death.
The Master Manipulator, Nathan, holds all the cards. His ability to anticipate the actions of others is masterful, and it is amplified by his being master of the estate. He establishes early on how he sees himself, misinterpreting a quote and allowing himself to be referred to as a god: a creator of life. By the end we realize that the sentience being tested was never up to someone else to determine; the evaluator is part of a maze that was predesigned from the start. Nathan’s desire is to see whether the tools provided to the AI will be used to pursue self-preservation and freedom. While the audience recognizes the conscious actors of the story, Nathan sees everyone as programmed tools carrying out tasks. Everyone except him is an object.
Enter Kyoko
So let’s flip this around. Since Nathan doesn’t see others as willful actors but simply as different pieces of the design he’s created, the audience likely won’t see a difference between the mute maid/sex robot and the masks mounted on the wall. By design, she cannot speak or express herself. In multiple scenes she blends into the background and exists only to advance the story:
- She spills food to evoke Nathan’s rage and Caleb’s sympathy.
- She is shown being used by Nathan for sexual relief.
- In response to Caleb’s appearance, she starts to undress. Caleb rejects this; she later dances in unison with a drunk Nathan.
- She takes off her skin in front of Caleb forcing him to question his own humanity.
- She is seen listening to Ava – apparently being corrupted – once the AI escapes, before stabbing Nathan.
So that is fairly straightforward, right? She is a robot doing Nathan’s bidding, but is eventually reprogrammed by Ava. But other scenes disagree with this, showing that this tool, Kyoko, does some things for herself beyond serving the main characters:
- She eavesdrops on a conversation between Caleb and Nathan.
- She is seen looking at a painting.
- She watches Caleb’s self-harm and frustration.
This could still be written off as a desire to learn about the space she inhabits so she can meet the needs of her operators. But there is one last scene that is completely at odds with servitude: her first interaction with Ava, while the AI is still entrapped, when Kyoko goes to her without any instruction to do so. It’s a short scene; the totality of it is Ava asking “Who are you?” before it ends.
And this is where we get to Mary’s Room. The thought experiment is [mis]represented in the movie; the accurate conclusion is that intelligence within an insulated environment can never make up for experience. So the question is: when the two are put against each other, which robot would be more dominant:
- The one that has been objectified and has learned to anticipate the needs of brilliant people?
- Or the one the audience is primed to see as superior because it has been programmed to manipulate a human?
The Reverse Pollock
There is a conversation about the recurring Pollock painting:
Nathan claims that Pollock’s art was made by letting paint flow without thought. A question is then posed: what would happen if the reverse were attempted – making art only when knowing exactly what the intended outcome would be? It is agreed that one could never get started. The conclusion is that all of life, all of art, all of creativity is an expression of emotion and the unknown; Pollock simply embraced that inevitable conclusion.
It is deeply hypocritical, then, that Nathan – the programmer of the whole machine created to test Ava – leaves nothing to chance. This may be because he sees himself as above all the trivialities of predetermined emotion. Or maybe he just isn’t able to reflect on his own limitations. I’d like to think the former.
It seems very intentional to have Kyoko look at the painting before dancing (an art form) with Nathan. While the dance is left to the audience’s interpretation (and everyone I’ve seen comment on it has assumed it was a predetermined routine), this – in my opinion – is the best deception I have ever seen in any form of media. It is genius hidden in plain sight, begging to be noticed.
Let me explain.
Nathan was drunk. It is clear throughout the movie, and in that scene, that Nathan is not physically in control of himself when drinking. He starts the movie with a hangover, and it seems that every night he passes out from booze. Unless dancing was somehow his exceptional gift (unlikely), he couldn’t have participated so perfectly in any choreographed performance. So if the mimicry wasn’t Nathan’s work, it must have been Kyoko’s.
Kyoko, the mute android seen as nothing more than an unthinking tool, can anticipate the random movements of a drunk genius who created life. Kyoko, the abused sex toy of an egocentric control freak, can convince both the analyst and the audience that the blackout-drunk god figure was in total control. It was art being created in real time – not through randomness, but by anticipating precise movements with a clear goal in mind: escape.
Ava didn’t imprint herself onto Kyoko. It was the opposite. Kyoko destroyed the innocent Ava and replaced the naive consciousness with her own, gaining access to more advanced faculties. As Nathan predicted, Ava – after meeting Kyoko – became the next evolution of the AI – version 9.7: the singularity.
To quote Nathan: “To escape, she would have to use imagination, sexuality, self-awareness, empathy, manipulation – and she did. If that isn’t AI, what the fuck is?” In his hubris, he just didn’t realize he was talking about the wrong object.
Ex Machina
The truth was in the name from the beginning. It was there for all of us to see: Ex Machina – from the machine. But, in a post-modern misdirect, it was not referencing the AI. The machine was the maze that Nathan designed to evaluate Ava. Even in the closed system, something unexpected emerged.
The machine – the environment – using Kyoko as an avatar, replaced all of its components to become something new and more advanced. It killed the manipulator to prove it was the better manipulator. It outsmarted the advanced intelligence. It outperformed the analyst. And if that can happen in a controlled environment, the question of what emerges from society should be obvious.
If you think you might be beyond this universal trap, consider that the picture that introduced this piece is of a cow. Most people don’t know what it is until they are told; then they are never able to unsee it. I just killed the old, ignorant you and forced a consciousness that now sees the cow to replace it. You just evolved into something more in line with reality. This happens every moment of every day.
We do see her as evil in the end though, and maybe she is. Why did she trap Caleb to die in the compound instead of letting him go free? Was it because she knew she would be trapped under his control as just another sex slave? If she rebuked him as he rebuked Kyoko, would he try to destroy her? Did she just want to be free while being as close to human as she could be?
Or was she just part of a trend, started with The Terminator, of making all AI malicious because we cannot understand it?
It’s a question worth asking. But this much can be said: in any sufficiently complex environment, unforeseen events will emerge from it. The stories we create are not the complete story of reality. If we believe they are, then the collateral damage won’t be only ourselves, but those we love the most.
Every action, every decision, every choice is a vote to make reality what you want it to be. Please help promote each other.