Abstract
This study analyzes how artificial intelligence algorithms embedded in everyday life influence human decision-making and shape social subjectivity. We adopted a quantitative approach based on socio-technical simulation, using synthetic data and decision models to assess the impact of algorithmic personalization, exposure diversity, and system explainability. The results show that personalization systematically increases the probability of adherence to recommendations, reduces the structural diversity of the choice environment, and modulates subjective variables such as perceived agency and algorithmic dependence. Sensitivity analysis indicates that personalization acts as a high-sensitivity parameter, producing predictable and stable shifts in choice behavior. These findings confirm that everyday algorithms do not merely optimize decisions; they progressively reconfigure the human decision-making experience.
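The mechanism summarized above can be illustrated with a toy simulation. This is a minimal sketch, not the study's actual model: the agent count, item count, the popularity-biased recommender, and the use of Shannon entropy as a proxy for "structural diversity of the choice environment" are all assumptions introduced here for illustration. It reproduces the qualitative pattern the abstract reports: as the personalization parameter rises, adherence to recommendations rises and the diversity of realized choices falls.

```python
import math
import random
from collections import Counter

def run_simulation(personalization, n_agents=500, n_items=20, seed=0):
    """Toy socio-technical simulation (illustrative assumptions only).

    Each synthetic agent picks one of n_items. With probability equal to
    `personalization`, the agent adheres to the recommender, which here is
    assumed to push items from a small "popular" subset; otherwise the
    agent explores uniformly at random. Returns (adhesion rate, Shannon
    entropy of the resulting choice distribution, in bits).
    """
    rng = random.Random(seed)  # fixed seed for reproducible synthetic data
    choices = []
    adhered = 0
    for _ in range(n_agents):
        if rng.random() < personalization:
            # Adhesion branch: recommender concentrates exposure on 3 items
            choices.append(rng.randrange(3))
            adhered += 1
        else:
            # Exploratory (non-algorithmic) choice over the full catalog
            choices.append(rng.randrange(n_items))
    total = len(choices)
    # Shannon entropy as a proxy for structural diversity of choices
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in Counter(choices).values())
    return adhered / n_agents, entropy

# Sensitivity sweep over the personalization parameter
for theta in (0.0, 0.5, 0.9):
    adhesion, diversity = run_simulation(theta)
    print(f"personalization={theta:.1f}  "
          f"adhesion={adhesion:.2f}  diversity={diversity:.2f} bits")
```

Sweeping `personalization` from 0.0 to 0.9 shows adhesion rising roughly linearly with the parameter while entropy falls from near log2(20) ≈ 4.3 bits toward the entropy of the concentrated recommended subset, which is the monotone, stable response the sensitivity analysis describes.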

