Artificial Intelligence – IP

My Work

ChatGPT’s Work

Important Figures in AI

Alan Turing

Turing was a British mathematician and is known as the father of AI, asking whether a machine can think (Turing, 1950). He developed a test designed to measure a computer’s intelligence by having observers compare computer- and human-generated responses. If the human observers view the machine as intelligent, it is deemed intelligent (Turing, 1950).

John McCarthy

McCarthy, a “founding father of AI”, helped coin the term “Artificial Intelligence” and developed the Lisp programming language, which became the language of choice for AI applications (Wikipedia, n.d.). McCarthy (1979, p. 3) viewed anything capable of solving a problem, even a thermostat, as “having beliefs”, which some could interpret as intelligence.

Herb Simon

Simon predicted that “machines will be capable of doing any work a man can do” and contributed foundational ideas about decision making (UBS Nobel Perspectives, n.d.). He viewed thinking as a “form of information processing” and believed computers could do this in a similar way to humans by searching for, storing, and applying patterns to make inferences (Frantz, 2003).

Marvin Minsky

Minsky cofounded MIT’s Artificial Intelligence Lab after building one of the first neural-network learning machines, work that spurred major AI advancements we see today (BBC News, 2016). He viewed the brain as a machine that can be studied and replicated in another machine for further study, ultimately leading to higher mental functioning in humans (Frontiers of Knowledge Awards – BBVA Foundation, 2014).

Timnit Gebru

Gebru published two papers on AI ethics that some considered controversial: one on the risks of LLMs being “stochastic parrots” (Bender, Gebru, McMillan-Major, & Shmitchell, 2021) and one showing that facial-recognition technology is biased against women and people of color (Buolamwini & Gebru, 2018). She argues that language models are not intelligent because they cannot understand social context and are trained on biased data (Bender et al., 2021).

Alan Turing

Alan Turing is considered the father of theoretical computer science and AI. He proposed the “Turing Test” (1950) as a way to identify intelligence—if a machine’s responses were indistinguishable from a human’s in conversation, it could be considered intelligent. His work laid the foundation for computation and machine reasoning.

John McCarthy

John McCarthy coined the term “artificial intelligence” in 1956 and organized the Dartmouth Conference, the birthplace of AI as a field. He developed LISP, a programming language critical for early AI. McCarthy viewed intelligence as the ability to use reasoning, abstraction, and symbolic representation to solve problems flexibly.

Herbert A. Simon

Herb Simon, a Nobel Prize winner, contributed to AI by developing early problem-solving programs like Logic Theorist and General Problem Solver. He believed intelligence could be demonstrated through rational decision-making and problem-solving, mimicking human thought processes. His work bridged psychology, economics, and computer science, shaping cognitive science and AI.

Marvin Minsky

Marvin Minsky, co-founder of MIT’s AI Lab, was a leading theorist of machine intelligence. He viewed the mind as a “society of agents,” where intelligence arises from the interaction of many simple processes. Minsky contributed to robotics, perception, and knowledge representation, emphasizing symbolic reasoning as a core measure of intelligence.

Timnit Gebru

Timnit Gebru is a modern AI researcher known for work on algorithmic bias, ethics, and AI’s social impact. She co-authored the influential “Stochastic Parrots” paper critiquing large language models. Gebru argues intelligence should not just be technical performance but also account for fairness, accountability, and avoidance of harmful biases in AI.

Machine Language vs. Human Language

While human language and machine language appear very different on the surface, they have many similarities. According to Ana Harris (2018), the two share two fundamental features: semantics – “information connected to a certain concept” – and syntax – a set of rules for arranging words and phrases. For instance, both languages must be written in a specific order with certain words to convey the intended meaning. The key difference is that human language allows for error and flexibility, whereas a machine language that runs into an error will stop the program, and ambiguity is not permitted (Harris, 2018). Every word has its role.
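Harris’s point about strict syntax can be illustrated with a toy sketch (here Python stands in for the machine language; the snippets are made up for illustration): a single missing character halts the machine entirely, even though a human reader would still recover the meaning.

```python
# A human reader understands "print the sum of 2 and 3" even with a typo,
# but a machine language tolerates no deviation from its syntax.
good = "print(2 + 3)"   # valid: every token plays its defined role
bad = "print(2 + 3"     # missing ")" -- the meaning is still obvious to a human

try:
    exec(compile(good, "<demo>", "exec"))  # runs and prints 5
except SyntaxError:
    print("good snippet rejected")

try:
    compile(bad, "<demo>", "exec")         # compilation halts here
except SyntaxError as e:
    print(f"machine rejects it outright: {e.msg}")
```

The machine never gets as far as guessing what the flawed line meant; it simply refuses to run it, which is exactly the inflexibility the paragraph above describes.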

Machine (programming) languages are designed to be precise, unambiguous, and interpretable by computers. They rely on strict syntax and logic, where even a small error may prevent execution. Their primary function is efficiency in instructing machines. Human (natural) languages, however, are inherently flexible, ambiguous, and context-dependent. They evolve socially and culturally, allowing for creativity, metaphor, and multiple interpretations. While programming languages aim for clarity and machine control, natural languages prioritize communication among people, adapting meaning through shared context. In essence, programming languages demand absolute precision, while natural languages thrive on nuance, adaptability, and cultural richness.

Machine Intelligence vs. Human Intelligence

Intelligence is a word with many possible definitions, but for the sake of this write-up, I will define it as the ability to learn, retain, and apply information in any situation. An example that sticks out to me in Chollet’s On the Measure of Intelligence (2019) is how a human can enter any new environment and figure out how to make a pot of coffee from start to finish. We locate the pot, find and grind the beans if necessary, add water, pack the grounds, and start the brew. A machine could certainly learn how to make a pot of coffee, but could it show the same awareness in a new environment, and if so, would that be true intelligence?

Machine intelligence, often termed artificial intelligence (AI), is built on algorithms and data-driven computations. It excels at specific, well-defined tasks, such as recognizing patterns, analyzing vast datasets, or generating text, but lacks consciousness, self-awareness, or intrinsic motivation. Human intelligence is holistic, combining logic, intuition, creativity, emotional depth, and ethical reasoning. It allows flexible adaptation across new and unpredictable contexts. Unlike machines, humans can generalize knowledge broadly and embed social, cultural, and moral considerations into decision-making. While AI simulates aspects of intelligence, it does not replicate the full spectrum of human cognition, lived experience, or subjective understanding.

Machine Learning vs. Human Learning

Machines, just like humans, can learn from experience: inputs are accepted, processed, and remembered. Two key differences are apparent in the way machines learn, however: data fed to the machine may carry inherent biases that lead to biased outputs (Buolamwini, 2019), and drawing on the learned data does not produce creative thinking or new ideas. While humans can have the desire to learn new things, machines are simply told what data to consume by an algorithm or human intervention. Machines are less flexible, creative, and adaptive than humans at this time, but I envision a future where machines could choose what they learn.

Machine learning is the process by which computers identify patterns and improve performance through exposure to data, usually via mathematical models and statistical optimization. It requires large datasets, labeled examples, and repeated training cycles to achieve accuracy. Human learning, by contrast, is adaptive, experiential, and multifaceted—people can learn from few examples, abstract concepts, or even imagination. Humans also integrate emotions, prior experiences, and cultural knowledge when learning. While machine learning is narrowly task-focused and limited by the data provided, human learning is broader, creative, and capable of generalizing across domains with minimal information.
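The repeated training cycles described above can be sketched in a few lines. This is a minimal illustration, not any particular library’s method; the dataset, learning rate, and epoch count are all made-up assumptions.

```python
# Minimal sketch of machine learning as repeated training cycles:
# a model improves a single weight w through exposure to labeled examples.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # labeled examples following y = 2x

w = 0.0    # the model starts knowing nothing
lr = 0.05  # learning rate: how large a step each update takes

for epoch in range(200):      # repeated training cycles over the data
    for x, y in data:
        pred = w * x          # the model's current guess
        error = pred - y      # how wrong the guess is
        w -= lr * error * x   # statistical optimization: nudge w to reduce error

print(round(w, 3))  # w converges to 2.0, the pattern hidden in the data
```

With each pass over the labeled examples, the error signal nudges the weight toward the underlying pattern (y = 2x). This is machine learning’s version of “learning from experience”: narrow, data-bound, and limited to the pattern it was shown, in contrast to the few-example, imaginative learning humans manage.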

Does ChatGPT Pass the Turing Test?

ChatGPT passes the Turing Test for the majority of humans and is capable of tricking the uninformed, but I believe the trained eye can see that it is a machine. The typical response is formal and very regimented, with common patterns. It uses this format: “Machine learning is…” followed by a few sentences, then “Human learning is…” followed by more details, concluding with the differences. It does not use references sufficiently, nor does it stray far from this structure. Because LLMs learn from scouring the web and being fed vast amounts of literature, it recognized my prompt as requesting academic-style output, which shows an ability to “understand” context. But the lack of creativity and flexibility shows in the approach it takes with each response. The responses in the biographical section, however, were more similar to mine and are harder to recognize as ChatGPT-generated. This may be due to the 50-word length constraint and the matter-of-fact nature of biographies, whereas the nature of language, intelligence, and learning is more ambiguous. Based on these insights, I conclude that ChatGPT has not quite passed the Turing test, and will not until everyone is fooled by it, rather than a subset of people.

References

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460.

McCarthy, J. (1979). Ascribing mental qualities to machines (33 pp.). Stanford, CA: Stanford University, Computer Science Department. Retrieved from http://www-formal.stanford.edu/jmc/ascribing.pdf

Wikipedia contributors. (n.d.). John McCarthy (computer scientist). In Wikipedia. Retrieved September 18, 2025, from https://en.wikipedia.org/wiki/John_McCarthy_(computer_scientist)

UBS Nobel Perspectives. (n.d.). Herbert Simon. UBS. Retrieved September 18, 2025, from https://www.ubs.com/herbert-simon

Frantz, R. (2003). Herbert Simon: Artificial intelligence as a framework for understanding intuition. Journal of Economic Psychology, 24(2), 265–277. https://doi.org/10.1016/S0167-4870(02)00207-6

BBC News. (2016, January 26). AI pioneer Marvin Minsky dies aged 88. BBC. https://www.bbc.com/news/technology-35409119

Frontiers of Knowledge Awards – BBVA Foundation. (2014, January 14). Marvin Minsky, founding father of artificial intelligence, wins the BBVA Foundation Frontiers of Knowledge Award in Information and Communication Technologies. BBVA Foundation. https://www.frontiersofknowledgeawards-fbbva.es/noticias/marvin-minsky-founding-father-of-artificial-intelligence-wins-the-bbva-foundation-frontiers-of-knowledge-award-in-information-and-communication-technologies/

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21) (pp. 610–623). ACM. https://doi.org/10.1145/3442188.3445922

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research, 81, 77–91. https://proceedings.mlr.press/v81/buolamwini18a.html

Harris, A. (2018, November 1). Human languages vs. programming languages. Medium. https://medium.com/@anaharris/human-languages-vs-programming-languages-c89410f13252

Chollet, F. (2019, November 5). On the measure of intelligence (arXiv:1911.01547). arXiv. https://arxiv.org/abs/1911.01547

Buolamwini, J. (2019, February 7). Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it. TIME. https://time.com/5520558/artificial-intelligence-racial-gender-bias/

OpenAI. (2025, September 22). ChatGPT shared conversation: “68d0e086-ea68-8005-b22c-28d3bda63865” [Large language model conversation]. https://chatgpt.com/share/68d0e086-ea68-8005-b22c-28d3bda63865
