Brain Activity

Waste not, want not

I made a promise to myself to write every week, and I have kept it. However, this week I spent the weekend writing a term paper for my Philosophy 101 class. So, as promised, dear readers, a Sunday writing.

What is the value of philosophy in the contemporary world?

Wisdom derived from centuries of philosophical study has been invaluable to humankind’s assessment of itself as a species on the earth, and that value has not lessened in our modern era. As our lives are increasingly shaped by the technology we use, our mindset about how much importance we grant that technology becomes critical to our autonomy and sense of agency as we operate in the world. Existing artificial intelligence models and shockingly accurate predictive algorithms use past behavior patterns to prime and persuade users to alter their future behavior in favor of a specified result. Wearable fitness watches monitor mobility and alert the user to stand up and move when they’ve been sedentary for too long, a feature helpful to those concerned about their health. What if the same type of technology is applied to advertising a product? For example, an app monitors a user’s activity and displays an ad for a weight loss drug after a search for low-calorie dinner recipes. One could argue that using predictive technology in this way no longer benefits the user; rather, it benefits the drug company, often with data acquired through ethically ambiguous means. This algorithmic overreach becomes an issue of privacy, and the technology’s intended use is muddled.

“Recommendation systems, search, language translators–now covering more than one hundred languages–facial recognition, speech to text (and back), digital assistants, chatbots for customer service, fraud detection, decision support systems, energy management systems, and tools for scientific research” are just a few examples of artificial intelligence we might encounter in our daily lives (Manyika 2022). Additionally, people are turning to artificially intelligent assistants and schedulers to determine how best to spend the hours of their day. A scheduling app that prioritizes productivity over mental and physical health produces a very different day for its user than one designed with well-being in mind. Where does the responsibility lie for the health and well-being of the human using the product? Should we always assume the human is in complete control? One might argue that the entity with computational pattern recognition has the upper hand in some cases. Developers should address these questions before releasing the technology to the public.

Philosophy is well suited as a starting point for creating guidelines for new and existing technology that marry commerce and science with morality, logic, ethics, and theological beliefs, all of which are important considerations as we fold artificial intelligence and learning algorithms into our human lives. The moral and ethical implications of this powerful technology are far-reaching, and it’s important we create thoughtful policies to protect and defend the humans affected by oversights and omissions during the design phase. Philosophical study can help draw lines through the gray areas of this broad field; it can provide stable ground from which to weigh the consequences of what we are capable of scientifically, provide protection against corruption of power, and eliminate the restrictive guidance of religious dogma in secular applications. Of particular personal interest is the effect the philosophy of language could have on the language-based operation of these technologies. Algorithmic prejudice is a real problem with current systems (Cataleta 2020). Inclusive, comprehensive large language models are needed to prevent social discrimination and to account for economic and cultural differences among all users. Critical analysis of the generalizations coded into these programs could go far in reducing the occurrence of unintended discrimination.

At the same time that technology increases its presence in the minutiae of our daily lives, the economic and cultural divides in the country are widening. At one end of the economic spectrum, users can set up bots to mine cryptocurrency for the cost of the electricity it takes to run the machine; at the other end, people are fighting against automation to secure well-paying jobs. Access to technological resources, and education about how best to use these new tools, are critical to flourishing in a tech-focused society. Until recently, new technology has been designed and built to specifications that benefit its creators, and those unlike the creators may have a very different experience. This often affects groups that are already marginalized. Minorities are disadvantaged, as seen with facial recognition software making less precise identifications of Black faces simply because the sample set used to train the artificial intelligence did not include enough diversity (Cataleta 2020). In this case an oversight in the model’s training created real-world discrimination.
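(An aside for the blog, not the paper: here is a minimal Python sketch of the mechanism described above, using numpy and scikit-learn. The data is entirely synthetic and every number, group, and "shift" is made up for illustration; this is not a model of any real facial recognition system, just a demonstration that underrepresenting a group in a training set can produce a model that is far less accurate for that group.)

```python
# Toy demonstration: a model trained mostly on group A serves group A well
# and an underrepresented, differently distributed group B poorly.
# All data is synthetic; the "shift" parameter is a made-up stand-in for
# whatever makes one group's data look different from another's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate n two-feature samples for a group centered at `shift`."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    # True label depends on the group's own center, plus a little noise.
    y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is underrepresented.
Xa, ya = make_group(5000, shift=0.0)
Xb, yb = make_group(100, shift=2.0)
X = np.vstack([Xa, Xb])
y = np.concatenate([ya, yb])

model = LogisticRegression().fit(X, y)

# Evaluate on fresh samples from each group separately.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    Xt, yt = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
```

Because group A dominates the training data, the learned decision boundary fits group A and largely ignores group B, so accuracy for group B collapses toward chance. The fix is exactly what the paper argues for: deliberate diversity in the sample set.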

Given the data sets provided, artificial intelligence reaches its own conclusion about exactly how to achieve its goal (a goal also assigned by developers). The technology is incentivized to achieve its desired result in the most efficient way possible, considering negative side effects only if the programmers account for them. As Daly et al. note, “AI has the potential to be used for many socially beneficial purposes,” yet “there is concern about dangerous and problematic uses of the technology, which has prompted a global conversation on the normative principles to which AI ought adhere” (Daly et al. 2021). Since artificial intelligence is only aware of the information it has been provided by developers, bias and discrimination have a high probability of being built into the system, and the current approach leaves ethical principles open to the interpretation of individual developers. A preferable approach would be to create “public policies that maximize the benefits of AI, while minimizing its potential costs and risks” and to require developers and programmers to adhere to these policies as they train their models (Engelke 2020).
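(Another blog-only aside: the paragraph above is essentially describing optimization against an objective function. A minimal sketch, with made-up actions and numbers, shows that an optimizer "considers" a side effect only when the objective it is handed includes a penalty term for it.)

```python
# Toy objective optimization. The actions, scores, and harms are invented
# for illustration; the point is only that the optimizer weighs exactly
# what its objective tells it to weigh, and nothing else.

actions = {
    # action: (goal_progress, side_effect_harm)
    "fast_but_harmful": (10.0, 8.0),
    "slow_and_safe":    (7.0, 0.5),
}

def best_action(weight_on_harm):
    """Pick the action maximizing progress minus a penalty for harm."""
    score = lambda a: actions[a][0] - weight_on_harm * actions[a][1]
    return max(actions, key=score)

print(best_action(weight_on_harm=0.0))  # -> fast_but_harmful
print(best_action(weight_on_harm=1.0))  # -> slow_and_safe
```

The philosophical work is in deciding which harms get a term in the objective at all, and how heavily to weight them, before the system ships.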

Philosophy allows a space for careful reflection on recent changes to society and, more broadly, on human interactions with one another and with nature. Since the introduction of artificial intelligence, humanity has changed considerably. Given the level of power artificial intelligence promises, it is imperative that we slow the release of unexamined technologies and spend more time in the development stage on a thorough philosophical dissection of the potential implications, both positive and negative. It is important to first determine the essence of what we hope the technology can do for us. Asking probing questions, creating clear concepts, and defining boundaries to preserve human rights are critical steps toward safe and ethical design. Philosophy can and should help mitigate the effects of multiplying our own ignorance.

That’s it. The last line is a little dark. I might change that up. If there are any glaring errors, please leave me a comment. I have until Wednesday to submit it.


Sources:

Manyika, James. “Getting AI Right: Introductory Notes on AI & Society.” Daedalus, vol. 151, no. 2, 2022, pp. 5–27. JSTOR, https://www.jstor.org/stable/48662023. Accessed 8 Dec. 2024.

Cataleta, Maria Stefania. Humane Artificial Intelligence: The Fragility of Human Rights Facing AI. East-West Center, 2020. JSTOR, http://www.jstor.org/stable/resrep25514. Accessed 8 Dec. 2024.

Daly, Angela, et al. “AI Ethics Needs Good Data.” AI for Everyone?: Critical Perspectives, edited by Pieter Verdegem, University of Westminster Press, 2021, pp. 103–22. JSTOR, http://www.jstor.org/stable/j.ctv26qjjhj.9. Accessed 8 Dec. 2024.

Engelke, Peter. AI, Society, and Governance: An Introduction. Atlantic Council, 2020. JSTOR, http://www.jstor.org/stable/resrep29327. Accessed 8 Dec. 2024.
