Artificial intelligence (AI) currently commands significant attention from the media and society. From its origins to the present, AI has seen its share of ups and downs in public interest, and its applications now touch daily life.
AI for the social good:
AI can be a major force for social good if the technology is shaped responsibly around the data available. There is currently a significant spotlight on the ethical, safety, and legal concerns surrounding future applications of AI.
AI technology can be leveraged to move from descriptive models (data analytics) to predictive models (machine learning) to prescriptive decisions (optimization, game theory, and mechanism design). In this way, AI enables us to go from "data to decision" in urban computing. With data collection now happening at this scale to aid decision-making, it is important to also consider the privacy implications of that data.
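The descriptive-to-predictive-to-prescriptive progression above can be illustrated with a toy sketch. The district names, demand figures, and fleet size below are hypothetical, and the "predictive model" is a naive historical average standing in for real machine learning:

```python
# A toy "data to decision" pipeline over hypothetical ride-demand data
# for a city. All names and numbers are illustrative, not from the text.

from statistics import mean

# Descriptive (data analytics): summarize observed demand per district.
demand_history = {"north": [120, 135, 128], "south": [80, 95, 90]}
avg_demand = {d: mean(v) for d, v in demand_history.items()}

# Predictive (machine learning): a naive forecast -- next period's demand
# is estimated as the historical average (a stand-in for a trained model).
forecast = {d: round(avg) for d, avg in avg_demand.items()}

# Prescriptive (optimization): allocate a fixed fleet of vehicles
# proportionally to forecast demand.
fleet_size = 50
total = sum(forecast.values())
allocation = {d: round(fleet_size * f / total) for d, f in forecast.items()}

print(allocation)  # -> {'north': 30, 'south': 20}
```

Proportional allocation is the simplest possible prescriptive step; a real system would replace it with a constrained optimization over costs, coverage, and fairness.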
Current research and applications in AI for sustainability can be organized in terms of data, modeling, decision making, and monitoring. The goal is to manage ecosystems with policies that are based on high-quality data and science.
Social Issues AI can help address:
There are many social issues to which AI could contribute improvements. Some examples are:
Justice: Identify individuals who are likely to cycle through public systems, such as emergency rooms and homeless shelters, and eventually end up in the criminal justice system; understand which factors best predict interactions with these systems; and develop interventions so that employees of these systems can reduce future interactions while providing quality services.
Economic Development: Allocate a city's resources toward the homes, neighborhoods, and communities where they are most likely to reduce waste.
Workforce Development: Help job training and skills development programs identify which skills will be in demand in the future, so they can train individuals and improve their employability.
Public Safety: Make dispatch decisions for emergency response calls, ensuring that the appropriate resources are sent without overspending.
Policing: Identify police officers who are at risk of adverse incidents with the public in order to match them with appropriate preventative interventions.
Education: Build systems to target early, effective interventions at students who may need extra support to graduate on time, who are unlikely to apply to college, or who are not ready for college or careers upon graduating from high school.
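Several of the examples above reduce to scoring individuals so that supportive interventions can be targeted early. The sketch below shows the shape of such a model for the education case; the features, weights, and threshold are hand-set assumptions for illustration, whereas a deployed system would learn its parameters from historical data and be audited for fairness and privacy:

```python
# A minimal sketch of scoring students for early intervention. Feature
# names and weights are hypothetical, not from any real system.

from math import exp

WEIGHTS = {"absence_rate": 3.0, "low_gpa": 1.5, "credits_behind": 0.8}
BIAS = -2.0

def risk_score(absence_rate, gpa, credits_behind):
    """Logistic score in [0, 1]; higher means more likely to need support."""
    z = (BIAS
         + WEIGHTS["absence_rate"] * absence_rate
         + WEIGHTS["low_gpa"] * max(0.0, 2.5 - gpa)  # penalty below 2.5 GPA
         + WEIGHTS["credits_behind"] * credits_behind)
    return 1.0 / (1.0 + exp(-z))

# Flag students above a threshold for outreach, not punishment.
students = [("A", 0.05, 3.6, 0), ("B", 0.40, 1.9, 2)]
flagged = [name for name, ar, gpa, cb in students
           if risk_score(ar, gpa, cb) > 0.5]
print(flagged)  # -> ['B']
```

The design choice that matters most is not the model but its use: scores should trigger extra support for the student, never penalties.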
Gaps and Barriers:
Work in this area requires deep and sustained collaboration between the target community and AI researchers. Some of the current gaps and barriers are as follows:
Lack of Experienced Collaborators: There isn’t an established history of AI work in this area. As a result, there isn’t a ready supply of trained AI researchers (or practitioners) who are familiar with the unique aspects of working on public-welfare problems. Conversely, governments and policymakers have little experience working directly with the research community. Highlighting ongoing projects (and successes), both to raise awareness and to provide a roadmap, is essential to growing this community. Also critical is training both sides to scope and formulate problems and projects that result in effective collaborations and impact.
Lack of Visible Activity and Case Studies: Building on the previous point, the level of activity in this space is far lower than the need. This lack of activity makes it difficult for governments and policymakers to know what is possible when considering uses of AI in their work. Increasing projects in this area is a “retail” problem, as pilot projects will inevitably be local and therefore shaped by the unique context and capabilities of the municipality and research group. Finding funding mechanisms that address local needs – e.g., the NSF Data Hubs model – is essential. Seeding many small prototype projects is also critical at this stage.
Lack of Reusable Infrastructure: While AI tools are increasingly available to a broad set of researchers, the underlying platforms to support them, within a context of public good, are missing. To continue the previous example, identifying at-risk populations will require access to data sets such as tax records, police records, education data, and healthcare data. Platforms that can access, aggregate, and curate such data sets do not exist; this is an enormous barrier to progress. Tools that build on such data sets – for example, basic methods for federation, inference, and so forth – can only be meaningfully developed once such infrastructure is available.
Legal, Regulatory, Compliance: No list of barriers would be complete without acknowledging the many legal and regulatory hurdles facing these projects. Access to data, and to populations to evaluate against, will require substantial investment of time, planning, and resources. Frameworks for the ethical evaluation of costs and benefits must be established. Understanding the impact of innovations will require understanding the level of compliance, and possibly methods to manage or pivot solutions in response to the perception, trust, and compliance of the target population.
There are innumerable opportunities to advance work in AI for public welfare, for example:
◗ Better data collection, digitization, and curation, particularly around urgent priorities.
◗ Better federation and integration of data sources currently not being used together.
◗ Better models and predictions of individual behaviors to support existing interventions.
◗ Better evaluation of existing and historical policies to understand their implications vis-a-vis enablement of AI advances.
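As one concrete illustration of federating data sources, agencies might pseudonymize identifiers locally before sharing records for integration. The hashing scheme, record keys, and visit counts below are assumptions for illustration, and a truncated hash alone is not a complete privacy solution:

```python
# A sketch of federating two agency data sets on a pseudonymous key.
# Field names and counts are hypothetical; real deployments need
# stronger de-identification than a bare hash of an ID.

from hashlib import sha256

def pseudonym(raw_id: str) -> str:
    """Replace a raw identifier with a one-way hash before sharing."""
    return sha256(raw_id.encode()).hexdigest()[:12]

# Each agency pseudonymizes locally, then shares only hashed records.
shelter_visits = {pseudonym("case-101"): 4, pseudonym("case-102"): 1}
er_visits = {pseudonym("case-101"): 2, pseudonym("case-103"): 5}

# Integration: combine per-person visit counts across the two systems.
federated = {}
for table in (shelter_visits, er_visits):
    for pid, count in table.items():
        federated[pid] = federated.get(pid, 0) + count

print(sorted(federated.values(), reverse=True))  # -> [6, 5, 1]
```

Because both agencies hash identifiers the same way, records for the same person line up without either side ever exchanging raw IDs.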
The opportunities and prospects of AI are immense, and it can succeed if the technology is applied with the past, present, and future in mind.