Artificial intelligence allows for alternative ways to tackle collective challenges. In particular, the United Nations Sustainable Development Goals (SDGs) and the Leave No One Behind Principle (LNOB) are an urgent call for action in which the scientific community has an important role to play.

This special track is dedicated to research that is triggered by key real-world questions, carried out in active collaboration with civil society stakeholders, and directed towards the advancement of the UN SDGs and LNOB. The track focuses on AI research that aims to have a positive impact on current global and local challenges while strengthening the civil society-science-policy interface. Multidisciplinary research, including computer, social and natural sciences, as well as multilateral collaborations with non-profits, community organizations, entrepreneurs and governmental agencies, is therefore an essential characteristic of the submissions to this track.

We invite two types of contributions: research papers and research project proposals. Authors who wish to submit demos relevant to the theme of AI and Social Good are invited to submit them via the IJCAI 2024 Demo track (see below).

The three primary selection criteria for all AI and Social Good submissions will be 1) the scientific quality of the work and its contribution to the state of the art in AI and the social or natural sciences, 2) the relevance and real impact on the UN SDGs, and 3) the collaboration with civil society stakeholders that have first-hand knowledge of the topic. We are looking for bottom-up approaches in which civil society organizations are not passive observers but active shapers of the proposals. All submissions should clearly state WHAT real-world problem is being tackled, WHO is participating in the project or paper (such as AI grassroots movement organizations, entrepreneurs, policy makers, community leaders, non-profits and universities), and HOW the topic is being handled and the results are measured (metrics).


Submission of Research Papers

Research papers should have the same format (7 pages + 2 pages for references) and follow the same general instructions as for the main conference. Technical appendices (which can include, but are not restricted to, datasets and code) are allowed. Papers are expected to satisfy the highest scientific standards, just like regular submissions to the main track of IJCAI 2024. In addition, research papers in this track are expected to provide multidisciplinary scientific contributions towards the advancement of the UN SDGs and to document the active involvement of non-profits, community organizations and governmental agencies. The presentation of case studies is highly encouraged in this track as a way to demonstrate the societal impact of the paper and to provide examples of how to translate global goals into local actions.

Just like research papers submitted to the main track of IJCAI 2024, papers in this track must be anonymous. Unlike papers in the main track, there will be no author rebuttal and no summary reject phase. Accepted research papers in the AI for Good track will be included in the IJCAI proceedings. An award will be given to honor outstanding research papers in this track.

Submission of Research Project Proposals

This specific mechanism of the AI and Social Good track goes beyond the publication of a paper and aims to generate long-term, productive collaborative lines of work with a focus on real-world challenges within the UN SDGs framework. Research project proposals are expected to connect the dots between NGOs, academic research and governmental agencies. Submissions in this category can range from incipient project ideas to projects under development, as long as they are based on in-the-field know-how, take a multilateral teamwork approach and have a clear implementation plan towards achieving societal impact.

Research project proposals are not necessarily expected to present research results in this edition of IJCAI. Still, selected proposals will be expected to report on their progress during the subsequent editions of IJCAI (2025–2026) and to submit a research paper to one of those conference editions.

We suggest the following structure for research project proposals:

  • Problem statement definition (including how the goals of the project have been defined in active collaboration with civil society organizations)
  • Strategy
  • Methods
  • Foreseen case studies
  • Expected results and impact on the UN SDGs
  • Evaluation criteria
  • Challenges and limitations
  • Ethical considerations
  • Implementation plan and needs
  • Scalability and economic sustainability of the solution
  • Project team description

The selection criteria will include the team’s multidisciplinary expertise concerning the challenge and the technology, the feasibility of the project implementation plan, the potential impact on real-world local challenges within a global framework (the UN SDGs), the scalability and economic sustainability of the solution, and the contribution to the state of the art in AI and the social or natural sciences.

Research project proposals should follow the same format (7 pages + 2 pages of references) and general instructions as the main track submissions, with the following exception: unlike papers submitted to the main track, research project proposals must not be anonymous and must include a 1-page appendix, not included in the page count, with short CVs of all team members. In addition, technical appendices (which can include, but are not restricted to, datasets and code) are allowed. Unlike papers in the main track, there will be no author rebuttal and no summary reject phase. Accepted research project proposals will be published in the IJCAI 2024 proceedings, just like traditional technical papers. An award will be given to honor outstanding research project proposals in this track.

Submission of Demos

Authors will not be able to submit demos directly to the AI and Social Good track. However, they are invited to submit demos relevant to this special track’s topic to the IJCAI 2024 Demo track, indicating the “AI and Social Good” nature of the demo within the submission procedure.

Important Dates:

  • Submission site opening: January 1, 2024
  • Paper submission deadline: February 22, 2024
  • Notification of acceptance/rejection: April 26, 2024

All deadlines are Anywhere on Earth (AoE).

Formatting guidelines: LaTeX styles and Word template:

Submission site: papers should be submitted by choosing “AI for Good” from the drop-down menu.

Track chairs:

Georgina Curto Rex
Girmaw Abebe Tadesse
Nitin Sawhney
Sibusisiwe Audrey Makhanya
Avishkar Bhoopchand

Enquiries: the track chairs can be reached at

Clarification on the Large Language Model (LLM) Policy

In line with the IJCAI 2024 policy, we (the Program Chairs) want to make the following statement with respect to the use of LLMs for papers submitted to the AI for Social Good track:

Authors who use text generated by a large-scale language model (LLM) such as ChatGPT should state so in the paper. They are responsible for the complete text and for its theoretical and factual correctness, including references to other papers and appendices. They are also responsible for ensuring that text produced with these LLMs is not plagiarized.

We would like to clarify further the intention behind this statement and how we plan to implement this policy for the AI for Social Good track at IJCAI 2024.


During the past few years, we have observed and been part of rapid progress in large-scale language models (LLMs), both in research and in deployment. This progress has not slowed down but has only sped up in recent months. As many, including ourselves, have noticed, LLMs are now able to produce text snippets that are often difficult to distinguish from human-written text. Undoubtedly, this is exciting progress in natural language processing and generation.

Such rapid progress often comes with unanticipated consequences as well as unanswered questions. There is, for instance, the question of whether text and images generated by large-scale generative models are considered novel work or mere derivatives of existing work. There is also the question of the ownership of text snippets, images or any other media sampled from these generative models. These questions, and many more, will certainly be answered over time as large-scale generative models are more widely adopted. However, we do not yet have clear answers to any of them.

Since how we answer these questions directly affects our reviewing process, which in turn affects members of our research community and their careers, we want to make our position on this new technology known. We have decided not to prohibit producing or generating text using large-scale language models this year (2024), but rather to place the responsibility on the authors: the ideas of the paper must always be clearly and completely those of the authors. In addition, if LLMs are used, authors should clearly communicate this in the article, stating the purposes for which the LLMs have been used. This decision will be revisited for future iterations of the AI for Social Good track.

Implementation and Enforcement

As we are well aware, it is difficult to detect whether any given text snippet was produced by a language model. The AI for Social Good PC team does not plan to implement any automated or semi-automated system to check submissions against the LLM policy this year (2024). Instead, we plan to investigate any potential violation of the LLM policy when a submission is brought to our attention with a significant concern about a potential violation. Any submission flagged for a potential violation of this LLM policy will go through the same process as any other submission flagged for plagiarism.

As we learn more about the consequences and impacts of LLMs on academic publications, and as we redesign the LLM policy for future conferences (after 2024), we will consider different options and technologies to implement and enforce the latest LLM policy in future iterations.