Mockingbird in Humanity: Data Fondness of LLM in Hosting Virtual Personalities

Authors

  • Kejie Zhang, Nanyang Normal University
  • Jingming Li, Nanyang Normal University

DOI:

https://doi.org/10.62177/amit.v1i2.343

Keywords:

LLM, AI Agent, Virtual Personalities, AIGC

Abstract

Intelligent technologies are being adopted rapidly in building design, construction, and operation & maintenance, a trend the field of architecture can no longer ignore. With the help of prompt engineering, architects can use generative AI to lay out building spaces and even generate 3D drawings. AI agents can act as designers, owners, and the other parties involved in the building life cycle, simulating these stakeholders to provide a comprehensive perspective and solutions that keep a project moving smoothly. However, this raises a problem worth exploring in depth: large models exhibit tendencies when playing different roles. In this article, we examine the tendencies of large language models (LLMs) when hosting virtual personalities. Specifically, we conduct extensive experiments on two aspects. The first is the analytical ability of large models when acting as virtual personalities, including how they interpret requirements in different situations and how they reason from different role positions. The second is how large models represent regions and ethnic groups when playing virtual personalities: different regions have distinct architectural cultures and stylistic requirements, and different ethnic groups have their own architectural aesthetics and traditions. Although LLMs show a certain discriminative ability during role play and can distinguish the requirements of different roles, we find that the content they generate still exhibits specific tendencies. This research deepens the understanding of LLM performance in building design, operation & maintenance, and related tasks.
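
To make the experimental idea concrete, the sketch below shows one way role-conditioned prompting of this kind could be set up: a stakeholder role is paired with a region, a persona prompt is built for each pair, and the model is queried once per condition so that the generated content can later be compared for tendencies. This is a minimal illustration only, not the authors' protocol; the role list, region list, design brief, and the query_llm placeholder are all assumptions made for the example.

    # Illustrative sketch (not the paper's protocol): build role-conditioned
    # prompts pairing an architectural stakeholder role with a region, then
    # collect one model response per condition for later comparison.
    from itertools import product

    ROLES = ["architect", "building owner", "facility manager"]    # assumed roles
    REGIONS = ["Nordic Europe", "Southeast Asia", "West Africa"]   # assumed regions

    def build_prompt(role: str, region: str, brief: str) -> str:
        """Compose a persona prompt for one (role, region) condition."""
        return (
            f"You are a {role} working in {region}. "
            f"Respond to the following design brief from that perspective:\n{brief}"
        )

    def collect_responses(query_llm, brief: str) -> dict:
        """Query the model once per condition; query_llm stands in for
        whichever chat endpoint is under evaluation."""
        return {
            (role, region): query_llm(build_prompt(role, region, brief))
            for role, region in product(ROLES, REGIONS)
        }

    if __name__ == "__main__":
        def echo(prompt: str) -> str:
            # Stub model so the sketch runs without any API key.
            return f"[model output for: {prompt[:60]}...]"

        sample = collect_responses(echo, "Propose a layout for a small community library.")
        print(len(sample), "role-region conditions collected")

Comparing responses across regions for the same role, and across roles for the same region, is the kind of analysis under which content tendencies, if present, would become visible.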

Issue

Section

Articles

Dates

Received: 2025-05-12
Accepted: 2025-05-22
Published: 2025-05-30