by Jeffrey Huang, January 21, 2021
Jeffrey is pursuing a B.A. in Psychology at Stony Brook. He enjoys technology and is always keeping up with the latest hardware releases.
***FALL 2020 CONTEST SUBMISSION***
Technology. Such a simple application of science can have far-reaching implications for our lives, our history, and the world. When people talk about the dangers of technology, popular media has us thinking of robots and, by extension, Artificial Intelligence (AI). Works like The Matrix franchise, Person of Interest, or 2001: A Space Odyssey (the novel or the film) highlight AI’s potential to change our world through apocalyptic means, a dystopian society, and so on. Would you believe me if I told you we were already, in some respects, living in such a society today? Think of a time when you discussed something out in the open with a friend, only to see related advertisements the next time you browsed the web. That was most likely the work of AI, or more specifically, machine learning (ML) algorithms. The overhead of employing many human listeners would be prohibitive, so such work is mostly delegated to automation. This essay is a culmination of what I’ve learned from Professor Brennan’s PSY 369, Psychology in the Age of Intelligent Machines, and what I know as a computer enthusiast.
The AI singularity seems like a technological milestone that humanity may never reach. Yet despite less-than-ideal implementations in areas like human language translation, we have progressed significantly over the course of computing history. By the literal definition, automated systems can already pass the Turing Test as it was written in the 1950s. Nobody uses the Turing Test that way anymore; it survives more as a thematic benchmark for near-human systems. I would recommend Vsauce’s Mind Field episode on the topic for reference (Stevens, 2017). Some key examples of AI efforts come from major companies you’ve probably heard of. Tesla is one of many automobile manufacturers developing autonomous driving (“Autopilot AI”, 2020). Google, being the technological juggernaut it is, runs general AI research and development along with custom computer chips (“Google AI”, 2020). Even Boston Dynamics uses AI, with autonomous functionality built into its Spot lineup of robots (“Spot®”, 2020).
Most prominently, AI is in the social media we use every day, through feed recommendations and the all-important corporate advertisements that fund these services. Most of these efforts are not made with malicious intent, but their implications may be anything but benevolent. Even with essays like this one, the majority of users will not really notice AI as it slowly creeps into our lives. It’s like the rising sun: minute by minute, you don’t notice the growing light, but compare the first minute to the last and the change is unmistakable. In the same way, technology and AI integrate themselves into our lives until we no longer see how much has changed. For reference, ask older individuals what life was like before the internet or the proliferation of accessible personal computing. We’ve advanced from expensive IBM compatibles to Chromebooks.
The issue with AI is not so much the underlying technology, nor the idea behind it. Like many concepts, it is good on paper; in practice, nobody adheres to what might make it great. In the interest of expediency and cost-effectiveness, companies push out half-baked implementations, and we see the negative consequences. AI/ML algorithms are simply algorithms: they predict an approximation of the world, with less precision than the real thing. It’s like walking through your house blindfolded. You probably won’t fall or seriously injure yourself, but seeing with your eyes is far better than relying on a reconstruction from memory. So it is with AI, except that instead of stumbling through a controlled, familiar environment, these systems are thrown into the real world. Instead of being blindfolded in your house, you’re blindfolded and dropped at a random spot on a football field. Imagine the disorientation: you would have no idea where you were relative to the rest of the field.
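That approximation can be made concrete with a toy model. The sketch below is purely illustrative (the essay names no specific algorithm): a simple least-squares fit learns a line from noisy observations of the rule y = 2x + 1, and its predictions land close to, but never exactly on, the truth.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (pure Python)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# The "real world" is y = 2x + 1, plus noise whose cause the model never sees.
xs = [0, 1, 2, 3, 4, 5]
noise = [0.4, -0.3, 0.2, -0.5, 0.3, -0.1]
ys = [2 * x + 1 + e for x, e in zip(xs, noise)]

a, b = fit_line(xs, ys)
prediction = a * 10 + b  # extrapolating beyond the data it was shown
```

The fitted slope and intercept land near, but not exactly on, the true values of 2 and 1, and the gap grows the further the model extrapolates: a blindfolded walk that works well enough in the training "house" and less well on the open field.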
AI’s deployment in the current justice system and in job screening is akin to these blindfold analogies. You might suspect evil programmers, but the harm is not usually intentional. AI algorithms are mysterious in that regard: under current frameworks and paradigms, how a model arrives at its results is largely opaque, even to its creators. Often this opacity comes with eerily accurate predictions, sometimes from scant data (privacy violations notwithstanding). Despite all this, such systems are sometimes hailed as a great innovation: finally, one can get “objective” judgements and predictions.
The truth, though, is that there is no such thing as an objective algorithm. AI is only as good as the data it’s fed, and in some cases, the wrong data perpetuates broken systems. For instance, recidivism algorithms are biased against disadvantaged populations like people of color.
recidivism – a tendency to relapse into a previous condition or mode of behavior, especially to relapse into criminal behavior (Merriam-Webster, n.d.)
In some cases, more privileged, Caucasian individuals come out ahead because of confounding lifestyle correlations tied to past outcomes. An African-American rated high in recidivism risk will receive more scrutiny on parole than a Caucasian rated low. The irony is that the outcomes can invert the predictions: the African-American obeys the law while the Caucasian ends up back in prison. Despite these failures, such algorithms are pushed onto judges who don’t know any better, and innocent individuals are caught in the crossfire (Angwin et al., 2016).
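The feedback loop behind this can be sketched in a few lines. The example below is a hypothetical toy, not the actual system ProPublica analyzed: two groups behave identically, but one was historically arrested twice as often, and a naive risk score built on arrest counts inherits that bias.

```python
# Hypothetical toy data: each record is (group, actually_reoffended,
# prior_arrests). Groups A and B behave identically, but B's members were
# historically arrested twice as often for the same conduct.
records = [
    ("A", False, 1), ("A", False, 1), ("A", True, 2), ("A", True, 2),
    ("B", False, 2), ("B", False, 2), ("B", True, 4), ("B", True, 4),
]

def predicted_high_risk(prior_arrests):
    # A naive "risk score" learned from the biased arrest counts.
    return prior_arrests >= 2

def false_positive_rate(group):
    # Fraction of people who did NOT reoffend but were still flagged high risk.
    did_not = [r for r in records if r[0] == group and not r[1]]
    flagged = [r for r in did_not if predicted_high_risk(r[2])]
    return len(flagged) / len(did_not)

print(false_positive_rate("A"))  # → 0.0
print(false_positive_rate("B"))  # → 1.0
```

No one programmed the score to consider group membership; it simply learned from data that already encoded unequal enforcement, which is exactly the mechanism the paragraph above describes.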
With that in mind, is there reason to panic, shout, and protest? Yes and no. While the record is not always great, as enumerated above, these technologies bring real benefits when used properly. Nuclear technology offers an easy example of this duality: it can be used to terrible effect, or to generate power. AI has shown immense potential in technology demos and real-world implementations alike. In video games, AI can enable higher perceived visual fidelity through more realistic lighting or resolution upscaling, and such implementations are slowly being added to newer games and next-generation consoles (“RTX. It’s On. Ultimate Ray Tracing and AI,” 2020; Battaglia, 2020). Outside of gaming, there is frame interpolation for animation, where an algorithm attempts to make motion smoother. Traditional animations like Pixar films can suffer visually from the attempt, as seen in a thread on Twitter (Crimson Mayhem, 2020).
For stop-motion, though, this changes the game completely. It’s much easier to film an animation at 15 fps and let AI produce a smoother 60 fps final product than to painstakingly pose and shoot every extra frame by hand (Boosting Stop-Motion to 60 fps using AI, 2020). And of course, there is infinite comedic potential, especially with song generation or translations (The Tonight Show Starring Jimmy Fallon, 2020). Beyond that, AI has the capability to literally change the way we work. Imagine a richer world where AI assists in creative work, or in formulating novel chemicals that could change lives (Conti, 2016; Hessler & Baringhaus, 2018). That is also in the works, alongside the dystopian applications above.
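The tools in that video use learned motion estimation, but the core idea of interpolation, synthesizing an in-between frame from its two neighbors, can be sketched with naive linear blending. This is a hypothetical illustration, not the method the video uses:

```python
def blend(frame_a, frame_b, t=0.5):
    """Naively synthesize an in-between frame by linearly blending pixels.
    frame_a, frame_b: lists of pixel intensities (0-255)."""
    return [round(a * (1 - t) + b * t) for a, b in zip(frame_a, frame_b)]

# Two consecutive 15 fps frames of a 4-pixel "clip": a bright spot moves
# one pixel to the right between them.
frame1 = [255, 0, 0, 0]
frame2 = [0, 255, 0, 0]
middle = blend(frame1, frame2)  # the inserted frame doubles the frame rate
print(middle)  # → [128, 128, 0, 0]
```

Notice the blended frame shows the spot smeared across both positions rather than moved halfway: this ghosting is precisely why real interpolators estimate motion instead of averaging pixels, and why styles animated "on twos" for artistic effect, like the Pixar examples above, can look wrong when interpolated.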
The key point of this essay is not to get you fired up one way or the other about AI, but to make you aware of these systems and the changes they bring to our society. Congress has tried, so far unsuccessfully, to address them, and it is our job as the greater public to stay informed and act accordingly. Spreading awareness is an uphill battle, as Brandolini’s Law suggests, but given how invisibly these systems currently operate, it is best to bring them to light.
Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. ProPublica. Retrieved 31 December 2020, from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Autopilot AI. Tesla.com. (2020). Retrieved 31 December 2020, from https://www.tesla.com/autopilotAI
Battaglia, A. (2020). PlayStation 5: what to expect from next-gen console ray tracing. Eurogamer.net. Retrieved 31 December 2020, from https://www.eurogamer.net/articles/digitalfoundry-2020-playstation-5-ray-tracing-software-analysis
Boosting Stop-Motion to 60 fps using AI. (2020). [Video]. Retrieved 31 December 2020, from https://www.youtube.com/watch?v=sFN9dzw0qH8
Crimson Mayhem [@Crimson_Mayhem_]. (2020, October 6). “You want to know why converting animation that were specifically made in 24 frames per second to 60 FPS…” [Tweet]. Twitter. https://twitter.com/Crimson_Mayhem_/status/1313562730977255426
Google AI. Google. (2020). Retrieved 31 December 2020, from https://ai.google/.
Merriam-Webster. (n.d.). Recidivism. Merriam-Webster. Retrieved 11 January 2021, from https://www.merriam-webster.com/dictionary/recidivism
Spot®. BostonDynamics.com. (2020). Retrieved 31 December 2020, from https://www.bostondynamics.com/spot
Stevens, M. (2017). Artificial Intelligence – Mind Field (Ep 4) [Video]. Retrieved 31 December 2020, from https://www.youtube.com/watch?v=qZXpgf8N6hs
The Tonight Show Starring Jimmy Fallon. (2020). Google Translate Songs with Halsey [Video]. Retrieved 31 December 2020, from https://www.youtube.com/watch?v=BRZ4zci_YUU
Additionally, you probably consented to such listening when agreeing to the arcane and lengthy Terms of Service or End User License Agreements for things like Google services.
Even as translation services like DeepL have improved, their output can still sound odd to native speakers, as I’ve learned in Chinese with my own family.
Algorithms designed to predict whether inmates will re-offend after being released on parole.
It takes more information and effort to correct misinformation than to produce it, especially on the internet.