Call for Papers: Special Issue: Civilizing and Humanizing AI
• Major category: Computer Science – Tier 3
• Minor category: Computer Science, Software Engineering – Tier 3
The emergence of large language/foundation models (LLMs/LFMs) such as GPT, Stable Diffusion, DALL-E, and Midjourney has dramatically altered the trajectory of progress in AI and its applications. Enthusiasm for AI has expanded beyond the realm of AI researchers and has reached the general population; indeed, we appear to be living in an exciting time of scientific proliferation. Present-day AI exhibits promising forms of intelligence along the spectrum from individualized to generalized intelligence, but it also has unexpected limitations and is susceptible to significant misuse. AI’s “eloquence” has reached a level where it has become notably challenging for a human to discern AI-generated content, be it text, images, or video. We refer to this as the “eloquence” characteristic.
Conversely, the worrisome rise of hallucinations in AI models raises credibility issues. We refer to this as the “adversity” characteristic. Recently, the governments of the United States and the European Union have put forth preliminary proposals for regulatory frameworks governing the safety of AI-powered systems. AI systems that adhere to such regulations are referred to by a recently coined term, “Constitutional AI”. The primary objective of these regulatory frameworks is to establish safeguards against the misuse of AI systems and, in the event of misuse, to impose penalties on the individuals, groups, and/or organizations responsible for such misconduct. The effective implementation of these frameworks demands the design of processes and tools for civilizing and humanizing AI. “Civilizing AI” embodies a nuanced equilibrium between the machine’s eloquence and its inclination towards adversarial behavior. Complementing it, “Humanizing AI” (borrowed in part from Humanity-inspired AI) embodies the characterization of human expectations of the benefits and risks of adopting AI systems in society, given the machine’s eloquence and adversarial behavior. As AI systems increasingly take the place of a human (e.g., an autopilot driving a vehicle, a virtual assistant diagnosing or counseling a patient), humanizing AI aims to hold an AI system to the same behavioral expectations that we expect humans (e.g., a driver, a health professional) to abide by. This includes subjecting an AI system to ethics, socio-cultural norms, policies, regulations, laws, and values in alignment with the expectations placed on such a human actor.
We seek articles that address these two themes; representative topics include:
- Methods and frameworks for civilizing and humanizing AI
- Identifying and managing AI’s risk to individuals and society
- Detection of AI-generated content
- Modeling ethics, biases, accountability, and autonomy in AI systems
- Learning and reasoning for social norms and values
- Making AI models responsible and accountable
- Mitigating harmful hallucinations
- Building guardrails based on policy, regulations, and laws
- Adapting pretrained LLMs for individual and social context
- Incorporating cognitive models in AI models
- Cultural biases and mitigation techniques for LLMs
Prospective authors can send an abstract to firstname.lastname@example.org for feedback on the fit to this special issue.