By Chinedu Ugo George Nnaji
Abstract: Artificial intelligence models are shaped not only by algorithms and data but also by the cultural environments and values in which they are created. This paper presents a condensed comparative study of three major AI chat models – OpenAI’s ChatGPT, China’s DeepSeek, and the Perplexity AI assistant – to illustrate how cultural influences, technological design choices, and ethical and regulatory considerations intertwine in AI development. We find that societal values and norms can introduce biases into model training and alignment (e.g., open discourse vs. censorship), that technical approaches range from massive general-purpose models to efficiency-driven or retrieval-augmented systems, and that differing ethics and governance regimes lead to distinct content policies. Through a structured analysis, we highlight how each model’s functionality and behavior reflect the priorities and trade-offs of its origin – from ChatGPT’s Western liberal-tuned dialogue, to DeepSeek’s state-aligned responses, to Perplexity’s fact-focused, tool-assisted answers. The conclusion underscores the biases embedded in AI design and suggests future directions for creating more culturally inclusive and ethically governed AI systems.