The Future of AI: How Artificial Intelligence Will Change the World

Innovations in the field of artificial intelligence continue to shape the future of humanity across nearly every industry. AI is already the main driver of emerging technologies like big data, robotics and IoT, and generative AI has further expanded the possibilities and popularity of AI.

According to a 2023 IBM survey, 42 percent of enterprise-scale businesses have integrated AI into their operations, and 40 percent are considering it. In addition, 38 percent of organizations have implemented generative AI into their workflows, while 42 percent are considering doing so.

With so many changes coming at such a rapid pace, here’s what shifts in AI could mean for various industries and society at large.

The Evolution of AI

AI has come a long way since 1951, when Christopher Strachey wrote the first documented successful AI computer program: a checkers program that completed a whole game on the Ferranti Mark I computer at the University of Manchester. Thanks to developments in machine learning and deep learning, IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, and the company’s Watson won Jeopardy! in 2011.

Since then, generative AI has spearheaded the latest chapter in AI’s evolution, with OpenAI releasing its first GPT models in 2018. This has culminated in OpenAI developing its GPT-4 model and ChatGPT, leading to a proliferation of AI generators that can process queries to produce relevant text, audio, images and other types of content.

AI has also been used to help sequence RNA for vaccines and model human speech, technologies that rely on model- and algorithm-based machine learning and increasingly focus on perception, reasoning and generalization.

How AI Will Impact the Future

Improved Business Automation

About 55 percent of organizations have adopted AI to varying degrees, suggesting increased automation for many businesses in the near future. With the rise of chatbots and digital assistants, companies can rely on AI to handle simple conversations with customers and answer basic queries from employees.

AI’s ability to analyze massive amounts of data and convert its findings into convenient visual formats can also accelerate decision-making. Company leaders don’t have to spend time parsing through the data themselves; instead, they can use instant insights to make informed decisions.
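The kind of at-a-glance insight described above can be sketched in a few lines. This is a minimal, hypothetical example — every field name and figure below is invented for illustration — showing raw records condensed into a per-region summary a decision-maker might see on a dashboard:

```python
from statistics import mean

def summarize_by_region(records):
    """Group revenue by region and flag regions below the overall average."""
    totals = {}
    for rec in records:
        totals[rec["region"]] = totals.get(rec["region"], 0) + rec["revenue"]
    avg = mean(totals.values())
    return {
        region: {"revenue": total, "below_average": total < avg}
        for region, total in totals.items()
    }

# Invented sample data
sales = [
    {"region": "North", "revenue": 120},
    {"region": "South", "revenue": 80},
    {"region": "North", "revenue": 150},
    {"region": "South", "revenue": 60},
]
insights = summarize_by_region(sales)
```

Real business-intelligence pipelines add visualization and far richer analytics on top, but the shape is the same: aggregate, compare, flag.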

“If [developers] understand what the technology is capable of and they understand the domain very well, they start to make connections and say, ‘Maybe this is an AI problem, maybe that’s an AI problem,’” said Mike Mendelson, a learner experience designer for NVIDIA . “That’s more often the case than, ‘I have a specific problem I want to solve.’”

Job Disruption

Business automation has naturally led to fears over job losses. In fact, employees believe almost one-third of their tasks could be performed by AI. Although AI has made gains in the workplace, it’s had an unequal impact on different industries and professions. For example, administrative jobs like secretarial work are at risk of being automated, while demand for other roles, like machine learning specialists and information security analysts, has risen.

Workers in more skilled or creative positions are more likely to have their jobs augmented by AI rather than replaced. Whether it’s forcing employees to learn new tools or taking over their roles, AI is set to spur upskilling efforts at both the individual and company level.

“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” said Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.

Data Privacy Issues

Companies require large volumes of data to train the models that power generative AI tools, and this process has come under intense scrutiny. Concerns over companies collecting consumers’ personal data have led the FTC to investigate whether OpenAI’s data collection methods have harmed consumers, after the company potentially violated European data protection laws.

In response, the Biden-Harris administration developed an AI Bill of Rights that lists data privacy as one of its core principles. Although this framework doesn’t carry legal weight, it reflects the growing push to prioritize data privacy and to compel AI companies to be more transparent and cautious about how they compile training data.

Increased Regulation

AI could shift how certain legal questions are answered, depending on how generative AI lawsuits unfold in 2024. For example, intellectual property has come to the forefront in light of copyright lawsuits filed against OpenAI by writers, musicians and companies like The New York Times. These lawsuits will shape how the U.S. legal system interprets what is private and what is public property, and a loss could spell major setbacks for OpenAI and its competitors.

Ethical issues that have surfaced in connection to generative AI have placed more pressure on the U.S. government to take a stronger stance. The Biden-Harris administration has maintained its moderate position with its latest executive order , creating rough guidelines around data privacy, civil liberties, responsible AI and other aspects of AI. However, the government could lean toward stricter regulations, depending on  changes in the political climate .  

Climate Change Concerns

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Optimists can view AI as a way to make supply chains more efficient, carrying out predictive maintenance and other procedures to reduce carbon emissions . 

At the same time, AI could be seen as a key culprit in climate change . The energy and resources required to create and maintain AI models could raise carbon emissions by as much as 80 percent, dealing a devastating blow to any sustainability efforts within tech. Even if AI is applied to climate-conscious technology , the costs of building and training models could leave society in a worse environmental situation than before.   

What Industries Will AI Impact the Most?  

There’s virtually no major industry that modern AI hasn’t already affected. Here are a few of the industries undergoing the greatest changes as a result of AI.  

AI in Manufacturing

Manufacturing has been benefiting from AI for years. With AI-enabled robotic arms and other manufacturing bots dating back to the 1960s and 1970s, the industry has adapted well to the powers of AI. These  industrial robots typically work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly. 

AI in Healthcare

It may seem unlikely, but  AI healthcare is already changing the way humans interact with medical providers. Thanks to its  big data analysis capabilities, AI helps identify diseases more quickly and accurately, speed up and streamline drug discovery and even monitor patients through virtual nursing assistants. 

AI in Finance

Banks, insurers and financial institutions leverage AI for a range of applications, including detecting fraud, conducting audits and evaluating customers for loans. Traders have also harnessed machine learning’s ability to assess millions of data points at once, allowing them to quickly gauge risk and make smart investing decisions.
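As a toy illustration of gauging risk from a stream of data points, here is a minimal sketch — not any firm’s actual model, with invented return figures — that measures risk as the rolling volatility of recent returns, a common ingredient in quantitative trading features:

```python
from statistics import pstdev

def rolling_volatility(returns, window=3):
    """Standard deviation of returns over each sliding window."""
    return [
        pstdev(returns[i - window + 1 : i + 1])
        for i in range(window - 1, len(returns))
    ]

# Invented daily returns; a rising volatility reading signals rising risk
returns = [0.01, -0.02, 0.015, 0.03, -0.01]
risk = rolling_volatility(returns)
```

Production systems layer many such signals into a model; the point here is only that “risk” can be computed mechanically from raw numbers, at machine speed.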

AI in Education

AI in education will change the way humans of all ages learn. AI’s use of machine learning, natural language processing and facial recognition helps digitize textbooks, detect plagiarism and gauge students’ emotions to determine who’s struggling or bored. Both now and in the future, AI will tailor the learning experience to students’ individual needs.

AI in Media

Journalism is harnessing AI too, and will continue to benefit from it. One example is The Associated Press’ use of Automated Insights, which produces thousands of earnings report stories per year. But as generative AI writing tools such as ChatGPT enter the market, questions about their use in journalism abound.

AI in Customer Service

Most people dread getting a  robocall , but  AI in customer service can provide the industry with data-driven tools that bring meaningful insights to both the customer and the provider. AI tools powering the customer service industry come in the form of  chatbots and  virtual assistants .

AI in Transportation

Transportation is one industry that is certainly teed up to be drastically changed by AI. Self-driving cars and AI travel planners are just a couple of facets of how we get from point A to point B that will be influenced by AI. Even though autonomous vehicles are far from perfect, they may one day ferry us from place to place.

Risks and Dangers of AI

Despite reshaping numerous industries in positive ways, AI still has flaws that leave room for concern. Here are a few potential risks of artificial intelligence.  

Job Losses 

Between 2023 and 2028, 44 percent of workers’ skills are expected to be disrupted. Not all workers will be affected equally: women are more likely than men to be exposed to AI in their jobs, and combined with the gaping AI skills gap between men and women, this leaves women much more susceptible to job loss. If companies don’t have steps in place to upskill their workforces, the proliferation of AI could result in higher unemployment and fewer opportunities for people from marginalized backgrounds to break into tech.

Human Biases 

AI’s reputation has been tainted by its tendency to reflect the biases of the people who train its models. For example, facial recognition technology has been known to favor lighter-skinned individuals, discriminating against people of color with darker complexions. If researchers aren’t careful in rooting out these biases early on, AI tools could reinforce them in the minds of users and perpetuate social inequalities.
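A first step in rooting out such biases is simply measuring them. The sketch below is a hypothetical bias audit — the predictions and labels are invented for illustration — comparing a model’s accuracy across two demographic groups, which is the kind of gap auditors look for before deployment:

```python
def accuracy(pairs):
    """Fraction of (prediction, ground_truth) pairs that match."""
    return sum(pred == true for pred, true in pairs) / len(pairs)

# Invented (prediction, ground_truth) pairs, split by demographic group
group_a = [(1, 1), (0, 0), (1, 1), (1, 1)]   # model correct on all four
group_b = [(1, 1), (0, 1), (0, 1), (0, 0)]   # model correct on only two

# A large accuracy gap between groups is evidence of disparate performance
gap = accuracy(group_a) - accuracy(group_b)
```

Real audits use larger samples and metrics beyond raw accuracy (false-positive and false-negative rates per group, for instance), but the principle is the same: disaggregate performance by group and inspect the differences.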

Deepfakes and Misinformation

The spread of deepfakes threatens to blur the lines between fiction and reality, leading the general public to  question what’s real and what isn’t. And if people are unable to identify deepfakes, the impact of  misinformation could be dangerous to individuals and entire countries alike. Deepfakes have been used to promote political propaganda, commit financial fraud and place students in compromising positions, among other use cases. 

Data Privacy

Training AI models on public data increases the chances of data security breaches that could expose consumers’ personal information. Companies contribute to these risks by adding their own data as well. A  2024 Cisco survey found that 48 percent of businesses have entered non-public company information into  generative AI tools and 69 percent are worried these tools could damage their intellectual property and legal rights. A single breach could expose the information of millions of consumers and leave organizations vulnerable as a result.  

Automated Weapons

The use of AI in automated weapons poses a major threat to countries and their general populations. While automated weapons systems are already deadly, they also fail to discriminate between soldiers and civilians . Letting artificial intelligence fall into the wrong hands could lead to irresponsible use and the deployment of weapons that put larger groups of people at risk.  

Superior Intelligence

Nightmare scenarios depict what’s known as the technological singularity , where superintelligent machines take over and permanently alter human existence through enslavement or eradication. Even if AI systems never reach this level, they can become more complex to the point where it’s difficult to determine how AI makes decisions at times. This can lead to a lack of transparency around how to fix algorithms when mistakes or unintended behaviors occur. 

“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” said Marc Gyongyosi, founder of  Onetrack.AI . “I think that maybe five or 10 years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

Frequently Asked Questions

What does the future of AI look like?

AI is expected to improve industries like healthcare, manufacturing and customer service, leading to higher-quality experiences for both workers and customers. However, it does face challenges like increased regulation, data privacy concerns and worries over job losses.

What will AI look like in 10 years?

AI is on pace to become a more integral part of people’s everyday lives. The technology could be used to provide elderly care and help out in the home. In addition, workers could collaborate with AI in different settings to enhance the efficiency and safety of workplaces.

Is AI a threat to humanity?

It depends on how people in control of AI decide to use the technology. If it falls into the wrong hands, AI could be used to expose people’s personal information, spread misinformation and perpetuate social inequalities, among other malicious use cases.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is  increasingly touching people’s lives in settings that range from  movie recommendations  and  voice assistants  to  autonomous driving  and  automated medical diagnoses .

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

There's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There's also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc. 

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI,  it's tricky to nurture both innovation and basic protections.  Perhaps the most important innovation will be in approaches for AI accountability.


The AI Anthology: 20 Essays You Should Read About Our Future With AI

Chris McKay

Microsoft's Chief Scientific Officer, Eric Horvitz, has spearheaded an initiative aimed at stimulating an enriching and multidimensional conversation on the future of AI. Dubbed "AI Anthology," the project features 20 op-ed essays from an eclectic mix of scholars and professionals providing their diverse perspectives on the transformative potential of AI.

With the backdrop of impressive leaps in AI capabilities, notably OpenAI's GPT-4, the anthology is a collaborative effort aimed at elucidating the profound ways AI can benefit humanity while exploring potential challenges. While many fear the unknowns of AI advancement, the anthology is grounded in an optimistic view of the future of AI, aiming to catalyze thought-provoking dialogue and collaborative exploration.

The anthology is a remarkable testament to the multi-faceted nature of AI implications, ranging from the arts to education, science, medicine, and the economy. Horvitz's own journey with AI began with an early glimpse into the transformative capabilities of GPT-4. His awe-inspiring experience with the AI highlighted its potential to redefine disciplinary boundaries and ignite novel integrations of traditionally disparate concepts and methodologies. Yet, it also underscored the need for careful, thoughtful exploration of potential disruptions and adverse consequences.

Four essays will be published to the AI Anthology each week, with the complete collection available on June 26, 2023. Here are the first four essays:

  • A Thinking Evolution by Alec Gallimore, a rocket scientist and Dean of Engineering at the University of Michigan, gets curious about the odyssey of AI.
  • Eradicating Inequality by Gillian Hadfield, Professor of Law and Economics at the University of Toronto, champions legal access for all.
  • Empowering Creation by Ada Palmer, Professor of History at University of Chicago, explores the possibilities of the information revolution.
  • Accessible Healthcare by Robert Wachter, Chair of the Department of Medicine at the University of California, San Francisco, examines how AI could reshape clinical care.

The contributors to the anthology represent a broad spectrum of experts. Each provides a unique perspective on the potentials and challenges of AI, covering a range of sectors, from education and healthcare to the creative arts. They were all granted early confidential access to GPT-4 and were encouraged to reflect upon two crucial questions: How might this technology and its successors contribute to human flourishing? And, how might society best guide the technology to achieve maximal benefits for humanity? These two questions, designed to explore the potential positive impact of AI, are central to the AI Anthology .

The resulting collection of essays is well worth the read! It offers an optimistic lens through which to view the future of AI and serves as a call to action for us all to join the conversation and contribute to the development of AI that promotes human flourishing.

How artificial intelligence is transforming the world

By Darrell M. West, Senior Fellow, Center for Technology Innovation and Douglas Dillon Chair in Governmental Studies, and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
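The idea that machine learning “takes data and looks for underlying trends” can be shown in its simplest possible form: fitting a least-squares line to noisy points. This is a pedagogical sketch with invented data, not a production technique; real systems use far richer models, but the principle of extracting a pattern from observations is the same:

```python
def fit_line(xs, ys):
    """Ordinary least squares: return (slope, intercept) of the best-fit line."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    return slope, mean_y - slope * mean_x

# Invented noisy observations of an underlying trend (roughly y = 2x)
xs = [1, 2, 3, 4, 5]
ys = [2.1, 4.0, 6.2, 7.9, 10.1]
slope, intercept = fit_line(xs, ys)
```

The fitted slope recovers the underlying trend (about 2) despite the noise, which is exactly the “discern useful patterns” step described above.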

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.

Related Content

Jack Karsten, Darrell M. West

October 26, 2015

Makada Henry-Nickie

November 16, 2017

Sunil Johal, Daniel Araya

February 28, 2017

Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PricewaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.
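The no-human-in-the-loop matching the passage describes can be sketched as a toy price-priority order book. The `MiniBook` class below is invented for illustration and bears no resemblance to a production matching engine:

```python
import heapq

class MiniBook:
    """A toy limit-order book: resting sell orders sit in a min-heap
    so the best (lowest-priced) ask is always matched first."""
    def __init__(self):
        self.asks = []  # min-heap of (price, quantity)

    def add_ask(self, price, qty):
        heapq.heappush(self.asks, (price, qty))

    def buy(self, limit_price, qty):
        """Fill up to qty shares against asks priced at or below limit_price."""
        filled = 0
        while self.asks and qty > 0 and self.asks[0][0] <= limit_price:
            price, available = heapq.heappop(self.asks)
            take = min(qty, available)
            filled += take
            qty -= take
            if available > take:  # return the unfilled remainder to the book
                heapq.heappush(self.asks, (price, available - take))
        return filled

book = MiniBook()
book.add_ask(101.0, 50)
book.add_ask(100.5, 30)
filled = book.buy(101.0, 60)  # takes 30 @ 100.5, then 30 @ 101.0
```

Real exchanges add time priority within each price level, many order types, and cancellation, but the matching loop has this basic shape.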

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
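The kind of outlier spotting described here can be as simple as a standard-deviation screen over transaction amounts. The data and threshold below are invented, and production fraud systems use far richer models, but the idea of flagging deviant cases for human review looks like this:

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=3.0):
    """Return indices of transactions whose amount deviates from the
    mean by more than `threshold` sample standard deviations."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if sigma > 0 and abs(a - mu) / sigma > threshold]

payments = [102, 98, 110, 95, 105, 99, 5_000]  # one suspicious payment
suspicious = flag_outliers(payments, threshold=2.0)  # flags index 6
```

The flagged cases are not verdicts; they are the "additional investigation" queue the paragraph describes.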

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero day or zero second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems will need to migrate to a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
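The actual Merantix system applies deep learning to CT images, but the label-then-classify workflow the paragraph describes can be illustrated with a much simpler stand-in: a nearest-centroid classifier over made-up node features (size, irregularity). Everything below is hypothetical:

```python
def centroid(rows):
    """Mean feature vector of a list of labeled examples."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(normal, irregular):
    # "Training" here is just remembering each class's centroid.
    return {"normal": centroid(normal), "irregular": centroid(irregular)}

def classify(model, x):
    """Assign x to whichever class centroid it sits closest to."""
    def sq_dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(model, key=lambda label: sq_dist(model[label], x))

# Hypothetical (size_cm, irregularity) features for labeled nodes
model = train(normal=[[1.0, 0.1], [1.2, 0.2]],
              irregular=[[2.5, 0.9], [2.8, 1.0]])
label = classify(model, [2.6, 0.8])  # -> "irregular"
```

A deep network replaces the hand-picked features with ones learned from pixels, but the supervised pattern, labeled examples in, class predictions out, is the same.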

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23
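Chicago has not published the list's model, so code can only gesture at the general shape: a weighted sum over case features, clamped to the 0-to-500 band. The weights and feature names below are entirely invented, loosely echoing the analysts' findings that youth and prior victimization matter while drug arrests barely do:

```python
def risk_score(features, weights, lo=0, hi=500):
    """Clamp a weighted sum of case features to the scale's [lo, hi] band."""
    raw = sum(weights[name] * value for name, value in features.items())
    return max(lo, min(hi, round(raw)))

# Hypothetical weights -- NOT the city's actual model.
weights = {"youth": 220, "shooting_victim": 180, "drug_arrests": 2}
score = risk_score({"youth": 1, "shooting_victim": 1, "drug_arrests": 3},
                   weights)  # -> 406
```

Even this toy makes the policy stakes visible: whoever chooses the weights chooses who lands at the top of the list.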

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging (LIDAR) systems and AI are key to navigation and collision avoidance. Mounted on top of the vehicle, a LIDAR unit sweeps laser pulses through a 360-degree field of view and, together with radar, measures the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
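The range measurement at the heart of LIDAR is time of flight: a pulse travels out and back at the speed of light, so the distance is half the round trip. A minimal version:

```python
C = 299_792_458  # speed of light in a vacuum, m/s

def tof_distance(round_trip_seconds):
    """Distance to an object from a laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0

# A pulse that returns after 200 nanoseconds indicates an object
# roughly 30 meters away.
d = tof_distance(200e-9)
```

A real unit fires hundreds of thousands of such pulses per second across the full sweep, turning these single ranges into a 3D point cloud of the surroundings.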

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.
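A recommender of the kind Cincinnati describes can be caricatured as a score over call features. Every weight, feature name, and cutoff below is invented for illustration and has no connection to the city's actual system:

```python
def recommend_response(severity, bad_weather, similar_transports):
    """Toy triage: score a medical call and recommend on-site
    treatment or hospital transport. All weights are invented."""
    score = (severity * 10
             + (15 if bad_weather else 0)
             + similar_transports * 5)  # how similar past calls were resolved
    return "transport" if score >= 60 else "treat_on_site"

decision = recommend_response(severity=4, bad_weather=True,
                              similar_transports=2)  # -> "transport"
```

The real system presumably learns such thresholds from historical outcomes rather than hard-coding them, which is what lets it improve as call volume accumulates.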

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, while China ranks 93rd. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
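The compare-a-face-against-a-database step usually reduces to nearest-neighbor search over embedding vectors produced by an upstream network. Assuming those embeddings already exist (the names and vectors below are made up), the search itself is short:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def best_match(probe, gallery, threshold=0.9):
    """Return the identity most similar to the probe embedding,
    or None if nothing clears the similarity threshold."""
    name, sim = max(((n, cosine(probe, e)) for n, e in gallery.items()),
                    key=lambda t: t[1])
    return name if sim >= threshold else None

gallery = {"alice": [0.9, 0.1, 0.4], "bob": [0.1, 0.8, 0.6]}
who = best_match([0.85, 0.15, 0.45], gallery)  # -> "alice"
```

If the training data behind the embedding model is unrepresentative, similarity scores behave differently across demographic groups, which is exactly the failure mode Buolamwini describes.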

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether this principle of platform neutrality holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own data on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system generates more people with these capabilities, those shortages will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific questions the committee is asked to address include the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services make specific algorithmic choices that affect them.


If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66
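Burt’s point about transparency can be made concrete with a deliberately simple sketch (not from the report; the model, feature names, and weights below are all hypothetical): with a linear scoring model, each prediction can be decomposed into per-feature contributions, which is one way a data scientist can explain what a model is doing.

```python
# Hypothetical linear risk model: weights assumed to have been learned elsewhere.
weights = {"late_payments": 0.8, "income_k": -0.02, "account_age_y": -0.1}
bias = 0.5

def predict_with_explanation(features):
    """Return the score plus each feature's contribution to it."""
    contributions = {name: weights[name] * features[name] for name in weights}
    score = bias + sum(contributions.values())
    return score, contributions

score, why = predict_with_explanation(
    {"late_payments": 3, "income_k": 60, "account_age_y": 2}
)
print(round(score, 2))                       # 1.5
print(max(why, key=lambda k: abs(why[k])))   # "late_payments" drives the score
```

More complex models need more elaborate techniques, but the goal is the same: tying a prediction back to the inputs that produced it.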

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
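The aggregation step this describes can be sketched in a few lines (a toy illustration, not the researchers’ actual system; the scenario names and votes below are invented): collect many people’s judgments on dilemma scenarios, then let the majority preference decide the action for each scenario.

```python
from collections import Counter, defaultdict

def aggregate_votes(votes):
    """votes: iterable of (scenario, chosen_action) pairs from survey
    respondents. Returns {scenario: action preferred by the majority}."""
    tallies = defaultdict(Counter)
    for scenario, action in votes:
        tallies[scenario][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in tallies.items()}

# Hypothetical survey responses for two scenarios.
votes = [
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_passengers"),
    ("brake_vs_swerve", "brake"),
    ("brake_vs_swerve", "brake"),
]

policy = aggregate_votes(votes)
print(policy["swerve_vs_stay"])  # majority choice: protect_pedestrians
```

The real system used far richer scenario features and preference modeling, but the core move is the same: turning many individual judgments into one aggregate rule.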

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.


Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scola, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper, IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.


MIT Technology Review


The future of AI’s impact on society

As artificial intelligence continues its rapid evolution, what influence do humans have?

  • Joanna J. Bryson

Provided by BBVA

The past decade, and particularly the past few years, has been transformative for artificial intelligence, not so much in terms of what we can do with this technology as what we are doing with it. Some date the advent of this era to 2007, with the introduction of smartphones. At its most essential, intelligence is just intelligence, whether artifact or animal. It is a form of computation, and as such, a transformation of information. The cornucopia of deeply personal information that resulted from the willful tethering of a huge portion of society to the internet has allowed us to pass immense explicit and implicit knowledge from human culture via human brains into digital form. Here we can use it not only to operate with human-like competence but also to produce further knowledge and behavior by means of machine-based computation.

Joanna J. Bryson is an associate professor of computer science at the University of Bath.

For decades—even prior to the inception of the term—AI has aroused both fear and excitement as humanity contemplates creating machines in our image. This expectation that intelligent artifacts should by necessity be human-like artifacts blinded most of us to the important fact that we have been achieving AI for some time. While the breakthroughs in surpassing human ability at human pursuits, such as chess, make headlines, AI has been a standard part of the industrial repertoire since at least the 1980s, when production-rule or “expert” systems became a standard technology for checking circuit boards and detecting credit card fraud. Similarly, machine-learning (ML) strategies like genetic algorithms have long been used for intractable computational problems, such as scheduling, and neural networks have been used not only to model and understand human learning, but also for basic industrial control and monitoring.


In the 1990s, probabilistic and Bayesian methods revolutionized ML and opened the door to some of the most pervasive AI technologies now available: searching through massive troves of data. This search capacity included the ability to do semantic analysis of raw text, astonishingly enabling web users to find the documents they seek out of trillions of webpages by typing just a few words.
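The kind of statistical text search described above can be illustrated with a minimal TF-IDF ranker (a toy sketch, not how any production search engine works; the documents below are invented): terms that appear in a document but are rare across the collection count most toward relevance.

```python
import math

# Tiny invented document collection.
docs = {
    "d1": "machine learning for industrial control",
    "d2": "bayesian methods revolutionized machine learning",
    "d3": "the quest for artificial intelligence",
}

def tfidf_score(query, doc_text):
    """Score doc_text against query: term frequency weighted by
    inverse document frequency across the whole collection."""
    score = 0.0
    n = len(docs)
    for term in query.split():
        tf = doc_text.split().count(term)
        df = sum(1 for text in docs.values() if term in text.split())
        if tf and df:
            score += tf * math.log(n / df)
    return score

def search(query):
    """Return document names ranked by relevance to the query."""
    return sorted(docs, key=lambda name: tfidf_score(query, docs[name]),
                  reverse=True)

print(search("bayesian learning")[0])  # "d2" ranks first
```

Modern engines layer link analysis, learned ranking, and semantic embeddings on top, but weighting schemes of this family were the statistical backbone of 1990s-era retrieval.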

AI is core to some of the most successful companies in history in terms of market capitalization—Apple, Alphabet, Microsoft, and Amazon. Along with information and communication technology (ICT) more generally, AI has revolutionized the ease with which people from all over the world can access knowledge, credit, and other benefits of contemporary global society. Such access has helped lead to a massive reduction of global inequality and extreme poverty, for example by allowing farmers to know fair prices and the best crops to plant, and by giving them access to accurate weather predictions.


Having said this, academics, technologists, and the general public have raised a number of concerns that may indicate a need for down-regulation or constraint. As Brad Smith, the president of Microsoft, recently asserted, “Information technology raises issues that go to the heart of fundamental human-rights protections like privacy and freedom of expression. These issues heighten responsibility for tech companies that create these products. In our view, they also call for thoughtful government regulation and for the development of norms around acceptable uses.”

Artificial intelligence is already changing society at a faster pace than we realize, but at the same time it is not as novel or unique in human experience as we are often led to imagine. Other artifactual entities, such as language and writing, corporations and governments, telecommunications and oil, have previously extended our capacities, altered our economies, and disrupted our social order—generally though not universally for the better. Ironically, the well-evidenced assumption that we are on average better off for our progress may be the greatest hurdle to the challenges we currently need to overcome: sustainable living and reversing the collapse of biodiversity.

AI and ICT more generally may well require radical innovations in the way we govern, and particularly in the way we raise revenue for redistribution. We are faced with transnational wealth transfers through business innovations that have outstripped our capacity to measure or even identify the level of income generated. Further, this new currency of unknowable value is often personal data, and personal data gives those who hold it the immense power of prediction over the individuals it references.

But beyond the economic and governance challenges, we need to remember that AI first and foremost extends and enhances what it means to be human, and in particular our problem-solving capacities. Given ongoing global challenges such as security, sustainability, and reversing the collapse of biodiversity, such enhancements promise to continue to be of significant benefit, assuming we can establish good mechanisms for their regulation. Through a sensible portfolio of regulatory policies and agencies, we should continue to expand—and also to limit, as appropriate—the scope of potential AI applications.


A business journal from the Wharton School of the University of Pennsylvania

What Is the Future of AI?

November 9, 2023 • 26 min read.

If we want to coexist with AI, it’s time to stop viewing it as a threat, Wharton professors say.


AI is here and it’s not going away. Wharton professors Kartik Hosanagar and Stefano Puntoni join Eric Bradlow, vice dean of Analytics at Wharton, to discuss how AI will affect business and society as adoption continues to grow. How can humans work together with AI to boost productivity and flourish? This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.

Eric Bradlow: Welcome, everyone, to the first episode of the Analytics at Wharton and AI at Wharton podcast series on artificial intelligence. My name’s Eric Bradlow. I’m a professor of marketing and statistics here at the Wharton School. I’m also vice dean of Analytics at Wharton, and I will be the host for this multi-part series on artificial intelligence.

I can think of no better way to start the series than with two of my friends and colleagues who actually run our Center on Artificial Intelligence. The title of this episode is “Artificial Intelligence is Here.” As you will hear, we’ll do episodes on artificial intelligence in sports, artificial intelligence in real estate, artificial intelligence in health care. But I think it’s best to start just with the basics.

I’m very happy to have join with me today, first, my colleague Kartik Hosanagar. Kartik is the John C. Hower Professor at the Wharton School. He’s also, as I mentioned, the co-director of our Center on Artificial Intelligence at Wharton. And normally, I don’t read someone’s bio. First of all, it’s only a few sentences. But I think this actually is important for our listeners to understand the breadth and also the practicality of Kartik’s work. His research examines how AI impacts business and society, and something you’ll hear about is, that is what our center does. There’s kind of two prongs. Second, he was a founder of Yodle, where he applied AI to online advertising. And more recently and currently, to Jumpcut Media, a company applying AI to democratize Hollywood. He also teaches our courses on enabling technologies and AI business and society. Kartik, welcome.

Kartik Hosanagar: Thanks for having me, Eric.

Bradlow: I’m also happy to have my colleague, Stefano Puntoni. Stefano is the Sebastian S. Kresge Professor of Marketing here at the Wharton School. He’s also, along with Kartik, the co-Director of our Center on AI at Wharton. And his research examines how artificial intelligence and automation are changing consumption and society. And similar to Kartik, he also teaches our courses on artificial intelligence, brand management, and marketing strategies. Stefano, welcome.

Stefano Puntoni: Thank you very much.

Bradlow: It’s great to be with both of you. So maybe, Kartik, I’ll throw the first question out to you. While artificial intelligence is now the big thing that every company is thinking about, what do you see as— well, first of all, maybe even before what are challenges facing companies, how would we even define what artificial intelligence is? Because it can mean lots of things. It could mean everything from taking texts and images and stuff like that, and kind of quantifying it, or it could be generative AI, which is the same side of the coin, but a different part. How do you even view, what does it mean to say “artificial intelligence”?

Hosanagar: Yeah. Artificial intelligence is a field of computer science that is focused on getting computers to do the kinds of things that traditionally require human intelligence. What that is, is a moving target. When computers couldn’t play, say, a very simple game like— well, chess is not simple, but maybe even simpler board games. Maybe that’s the target. And then when you say computers can play chess, and when that’s easy for computers, we no longer think of that as AI.

But really, today, when we think about what is AI, it’s again, getting computers to do the kinds of things that require human intelligence. Like understand language. Like navigate the physical world. Like being able to learn from experiences, from data. So, all of that really is included in AI.

Bradlow: Do you put any separation between what I call— maybe I’m not even using the right words — traditional AI, which again back in my old days, we’ve had AI around, “How do you take an image, and turn it into something?” “How do we take video, how do we take text?” That’s one form of AI versus what’s got everybody excited today, which is ChatGPT, which is a form of large language model. Do you put any differentiation there? Or that’s just a way for us to understand. One is creation of data, and the other one is using it in an application of forecast and language.

Hosanagar: Yeah, I feel there is some distinction. But ultimately, they’re closely related. Because what we think of as the more traditional AI, or predictive AI, is all about taking data and understanding the landscape of the data. Let’s say you’re predicting whether an image is of Bob or of Lisa. You kind of say, “In the image space, in this region, if the shapes of the colors are like this, the shapes of the eyes are like this, then it’s Bob. In that area, it’s Lisa.” And so on. So, it’s mostly understanding the space of data, and being able to say, with emails, is it fraudulent or not? Which portion of the space has one value versus the other?

Now, once you start getting really good at predicting, you can start to use those predictions to create. And that’s the next step, where it becomes generative AI. Now you are predicting: what’s the next word? You might as well use that to start generating text: sentences, essays and novels, and so on.
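The jump Hosanagar describes, from predicting the next word to generating text, can be sketched with a toy bigram model. Everything here is a hypothetical illustration: real large language models do the same thing at vastly larger scale with learned representations, not word counts.

```python
from collections import defaultdict, Counter

# Tiny hypothetical corpus for the illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, how often each next word follows it.
nxt = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    nxt[a][b] += 1

def predict(word):
    # The predictive step: the most likely next word.
    return nxt[word].most_common(1)[0][0]

def generate(start, n):
    # Repeated prediction becomes generation.
    out = [start]
    for _ in range(n):
        out.append(predict(out[-1]))
    return " ".join(out)

print(generate("the", 4))  # "the cat sat on the"
```

The same loop, with a far better predictor, is how next-word prediction turns into sentences and essays.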

Bradlow: Stefano, let me ask you a question. If one went to your page on the Wharton web site — and by the way, just for our listeners, Stefano has a lot of deep training in statistics. But most people would say, “You’re not a computer scientist. You’re not a mathematician. What the hell do you have to do with artificial intelligence?” Like, “What role does consumer psychology play in artificial intelligence today? Isn’t it just for us math types?”

Puntoni: If you talk to companies and ask them why their analytics programs failed, you almost never hear the answer, “Because the models don’t work. Because the techniques didn’t deliver.” It’s never about the technical stuff. It’s always about people. It’s about lack of vision. It’s about the lack of alignment between decision makers and analysts. It’s about the lack of clarity about why we do analytics. So, I think that a behavioral science perspective on analytics can bring a lot of benefits to try to understand how we connect decisions in companies to the data that we have. That takes both the technical skills and the human insights, the psychology insights. Bringing those together has a lot of value and a lot of potential insights. A lot of low-hanging fruit, in fact, in companies, I think.

Bradlow: As a follow-up question, we all read these articles that say 70% of the jobs are going to go away, and robots or automation or AI is going to put me out of business. Should employees be happy with what’s going on in AI? Or is the answer, it depends on who you are and what you’re doing? What are your thoughts? And then Kartik, I’d love to get your thoughts on that, including the work you’re doing at Jumpcut. Because we all know one of the biggest issues in the current writers’ strike was actually, what’s going to happen with artificial intelligence? I’d love to hear your thoughts from the psychology or the employee motivation perspective, and then, what are you seeing actually out in the real world?

Puntoni: The academic answer to any question would be, “It depends. It depends.” But in my research, what I’ve been looking at is the extent to which people perceive automation as a threat. And what we find is that oftentimes the tasks being automated by AI have some kind of meaning to the person. They are essential to the way people see themselves, for example, in their professional identity. That can create a lot of threat.

So, you have psychological threats, and then you have these objective threats of maybe jobs on the line. And maybe you’ll be happy to know that I’ve tried out the professor job on some of these scoring algorithms, and we are fairly safe from our own replacement.

Bradlow: Kartik, let me ask you. And let me just preface this with saying, you probably don’t even know about this. Fifteen years ago, I wrote a paper with a former colleague and a doctoral student about how to use— I didn’t call it AI back then. But how to, basically, in large scale, compute features of advertisements and optimally design advertisements based on a massive number of features. And I remember the reaction. I first thought I was going to get rich. I went to every big media agency and said, “You can fire all your creative people. I know how to create these ads using mathematics.” And I was looked at like I had four heads. So, can you bring us up to the year 2023? Can you tell us what you’re doing at Jumpcut, and what role AI machine learning plays in your company, and just what you see going on in the creative world?

Hosanagar: Yeah. And I’ll connect that to, also, what you and Stefano just brought up about AI and jobs and exposure to AI and so on. I just came from a real estate conference. And the panel before I spoke was talking about, “Hey, this artificial intelligence, it’s not really intelligence. It just replicates whatever is in some data. True human intelligence is creativity, problem-solving, and so on.” And I was sharing over there that there are multiple studies now that talk about what AI can do, and cannot do. For example, my colleague, Daniel Rock, has a study where he shows that just LLMs, meaning large language models like ChatGPT, and before the advances of the last six months— this is as of early 2023— they found that 50% of jobs have at least 10% of their tasks exposed to LLMs. And 20% of jobs have more than 50% of their tasks exposed to LLMs. And that’s not all of AI, that’s just large language models. And that was also 10 months ago.

And people also underestimate the nature of exponential change. I’ve been working with GPT-2, GPT-3, the earlier models of this. And I can say every year the change is an order of magnitude. And so, you know, it’s coming. And it’s going to affect all kinds of jobs. Now, as of today, I can say that multiple research studies— and I don’t mean two, three, four, but several dozen research studies— that have looked at AI’s use in multiple settings, including creative settings like writing poems or problem-solving and so on, find that AI today can already match humans. But human plus AI today beats both human alone and AI alone.

For me, the big opportunity with AI is we are going to see a productivity boost like we’ve never seen before in the history of humanity. And that kind of productivity boost allows us to outsource the grunt work to AI, do the most creative things, and derive joy from our work. Now, does that mean it’s all going to be beautiful for all of us? No. For some of us, if we don’t reskill — if we don’t focus on having skills that require creativity, empathy, teamwork, leadership, those kinds of skills — then a lot of the other jobs are going away, including knowledge work. Consulting, software development. It’s coming into all of these.

Bradlow: Stefano, something Kartik mentioned in his last thing was about humans and AI. As a matter of fact, one of the things I heard you say from the beginning is, it’s not humans or AI. It’s humans and AI. How do you really see that interface going forward? Is it up to the individual worker to decide what part of his/her/their tasks to outsource? Is it up to management? How do you see people being even willing to skill themselves up in artificial intelligence? How do you see this?

Puntoni: I think this is the biggest question that any company should be asking right now, and not just about AI. Frankly, I think it’s the biggest question of all in business: how do we use these tools? How do we learn how to use them? There is no template. Nobody really knows how, for example, generative AI is going to impact different functions. We’re just learning about these tools, and these tools are still getting better.

What we need to do is to have some deliberate experimentation. We need to build processes for learning such that we have individuals within the organization tasked with just understanding what this can do. And there’s going to be an impact on individuals, on teams, on workflows. How do we bring this in, in a way that we don’t simply think of re-engineering a task to get a human out of the picture? How do we re-engineer new ways of working such that we can get the most out of people? The point shouldn’t be human replacement and obsolescence. It should be human flourishing. How do we take this amazing technology to make our work more productive, more meaningful, more impactful, and ultimately make society better?

Bradlow: Kartik, let me take what Stefano said and combine it with something that you said earlier, which was about the exponential growth rate. My biggest fear if I were working at a company today — and please, I’d love your thoughts— is that someone’s using a version of ChatGPT, or some large language model, or even a predictive model. Some transformer model. And they fit it today, and they say, “See? The model can’t do this.” And then two weeks later, the model can do this. Companies, in some sense, create these absolutes. Like, you just mentioned you were at a real estate conference. “Well, ChatGPT or large language models, AI, can’t sell homes. They can’t build massive predictive models using satellite data.” Maybe they can’t today, but maybe they can tomorrow. How do you, in some sense, try to help both researchers and companies move away from absolutes in a time of exponential growth of these methods?

Hosanagar: Yeah. I think our brains fundamentally struggle with exponential change. And probably, there is some basis to this in studies people have done on neuroscience or human evolution and so on. But we struggle with it. And I see this all the time, because I’ve been part of that. My work has been part of that exponential change from the very beginning. When I started my Ph.D., it was about the internet. And I can’t tell you the number of people who looked at the internet at any given point of time and said, “Nobody will buy clothing online. Nobody will buy eyeglasses online. Nobody would do this. Nobody would do that.” And I’m like, “No, no. It’s all happening. Just wait to see what’s coming.”

I think it’s hard for people to fathom. I think leadership, as well as regulators, need to realize what’s coming, understand what exponential change is, and start to work. You brought up previously, and I forgot to address it, the Hollywood writers’ strike. Now, it is true that today, ChatGPT cannot write a great script. However, when we work with writers, we are already seeing how it can increase productivity for writers. And in Hollywood, for example, writers are notorious because writing is driven by inspiration. You’re expecting the draft today. And what’s the excuse? “Oh, I’m just stuck at this point. And when I get unstuck, I’ll write again.” You can wait months and sometimes years for the writer to get unstuck.

Now, you give them a brainstorming buddy, and they start getting unstuck and it increases productivity. And yes, they’re right in fearing that at some point they’re going to keep interacting with the AI, and keep training the AI, and someday the AI is going to say, “You know what? I’m going to try to write the script myself.” And when I say the AI is going to say that, I mean the AI is going to be good enough, and some executive is going to say, “Why deal with humans?” And do that.

I think we need to both recognize that change is that fast and start experimenting and start learning. And people need to start upping their game and reskilling and get really good at using AI to do what they do. That reskilling is important. Stop viewing this as a threat. Because what’s happening is, you’re standing somewhere and there’s a fast bullet train coming at you. And you’re saying, “That train is going to stop on its own.” No, it’s going to run over you. And the only thing you can do and you have to do is get to the station, board the train, and be part of that train and help shape where it goes. All of us need to help shape where it goes.

Bradlow: Yeah. One example I like to give is that for 25-plus years I’ve been doing statistical analysis in R. And of course, for the last five to seven years, Python’s taken a much larger role. And I always promised myself I was going to learn Python. Well, I’ve learned Python now. I stick my R code into ChatGPT, and I tell it to convert it to Python. And I’m actually a damn good Python programmer now, because ChatGPT has helped me take structured R code and turn it into Python code.

Hosanagar: That’s a great example. And I’ll give you two more examples like that. The head of product at my company, Jumpcut Media, had this idea for a script summarization tool. What happens in Hollywood is the vast majority of scripts written are never read because every executive gets so many scripts. And you have no time to read anything. And you end up prioritizing based on gut and relationships. “Eric’s my buddy. I’ll read his script, but not this guy, Stefano, who just sent me a script. I don’t know him.” And that’s how decision-making works in Hollywood.

So, the head of product, who’s not a coder — he’s actually a Wharton alumnus — had this idea for a great script summarization tool that would summarize things using the language and parlance of Hollywood. He had the idea to build the tool, but he’s not a coder. Our engineers were too busy with other efforts, so he said, “While they’re doing that, let me try it on ChatGPT.” And he built the entire minimum viable product, a demo version of it, on his own, using ChatGPT. And it’s actually on our web site at Jumpcut Media, where our clients can try it. And that’s how it got built. A guy with no development skills.

I actually demonstrated, during this real estate conference, this idea that you post a video on YouTube, you’ve got 30,000 comments, and you want to analyze those comments and figure out what people are saying. You want to summarize it. I went to ChatGPT, and I said, “Six steps. First step, go to a YouTube URL I’ll share and download all the comments. Second step, do sentiment analysis of them. Third step, find the comments which are positive, send them to OpenAI, and give me a summary of all the positive comments. Fourth step, the negative comments, send them to OpenAI and give me the summary. Fifth step, tell the marketing manager what to do. Sixth step, give me the code for all of this.” It gave me the code in the conference with all these people. I put it in Google Colab, ran it, and now we’ve got the summary. And this is without me writing a single line of code, with ChatGPT. It’s not the most complex code, but this is something that previously would have taken me days and I would have had to involve RAs and so on. And now I can get it done.
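The pipeline Hosanagar describes can be sketched in a few lines. This is a minimal sketch with hypothetical data: a real version would pull comments via the YouTube Data API and send each bucket to an LLM for the summaries, whereas here a toy keyword classifier stands in for the sentiment step.

```python
def fetch_comments(video_url):
    # Stand-in for a YouTube Data API call (step one).
    return [
        "Loved this video, super helpful!",
        "Great explanation, thank you.",
        "Terrible audio, could not hear anything.",
        "This was a waste of time.",
        "Amazing content as always.",
    ]

POSITIVE = {"loved", "great", "helpful", "amazing", "thank"}
NEGATIVE = {"terrible", "waste", "awful", "bad"}

def sentiment(comment):
    # Toy keyword classifier (step two); a real pipeline would use a model.
    words = {w.strip(".,!?") for w in comment.lower().split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative"

def analyze(video_url):
    buckets = {"positive": [], "negative": []}
    for c in fetch_comments(video_url):
        buckets[sentiment(c)].append(c)
    # Steps three through five would summarize each bucket with an LLM
    # and draft advice for the marketing manager; here we report counts.
    return {label: len(cs) for label, cs in buckets.items()}

print(analyze("https://youtube.com/watch?v=example"))
# {'positive': 3, 'negative': 2}
```

The point of the anecdote holds even for this sketch: wiring these steps together by hand used to take days; describing them to a code-generating model takes minutes.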

Bradlow: Imagine in real estate doing that about a property, or a developer. And you say it doesn’t affect real estate. Of course it does! Absolutely, it could.

Hosanagar: It does. I also showed them, I uploaded four photographs of my home. Nothing else. Four photographs. And I said, “I’m planning to list this home for sale. Give me a real estate listing to post on Zillow that will make people read it and get excited to come and tour this house.” And it gave a great, beautiful description. There’s no way I could have written that. I challenged them, how many of you could have written this? And everyone at the end was like, “Wow. I was blown away.” And that is something that is doable today. I’m not even talking about what’s coming soon.

Bradlow: Stefano, I’m going to ask you and then I’ll ask Kartik as well, what’s at the leading edge of the research you’re doing right now? I want to ask each of you about your own research, and then I’ll spend the last few minutes that we have talking about AI at Wharton and what you guys are doing and hoping to accomplish. Let’s start with our own personal research. What are you doing right now? Another way I like to frame it is, if we’re sitting here five years from now and you have a bunch of published papers and you’ve given a lot of big podium talks, which I know you do, what are you talking about that you worked on?

Puntoni: Working on a lot of projects, all in the area of AI. And there are so many exciting questions. Because we never had a machine like this, a machine that can do the stuff that we think is crucial to defining what a human is. This is actually an interesting thing to consider. When you went back in time a few years and you asked, “What makes humans special?” people were thinking, maybe compared to other animals, “We can think.” And now you ask, “What makes a human special?” and people think, “Oh, we have emotions, or we feel.”

Basically now, what makes us special is what makes us the same as other animals, to some extent. You see how the world is really deeply changing. And I’m interested in, for example, the impact of AI for the pursuit of relational goals, or social goals, or emotionally heavy types of tasks, where previously we never had an option of engaging with a machine, but now we do. What does that mean? What are the benefits that this technology can bring, but also, what might be the dangers? For example, for consumer safety, as people might interact with these tools while experiencing mental health issues or other problems. To me, that’s a very exciting and important area.

I just want to make a point that this technology doesn’t have to be any better than it is today for it to change many, many things. I mean, Kartik was saying, rightly, this is still improving exponentially. And companies are just starting to experiment with it. But the tools are there. This is not a technology around the corner. It’s in front of us.

Bradlow: Kartik, what are the big open issues that you’re thinking about and working on today?

Hosanagar: Eric, there are two aspects to my work. One is slightly more technical, and the other is focused more on humans and societal interactions with AI. On the former side, I’m spending a lot of time thinking about biases in machine-learning models, in particular a few studies related to biases in text-to-image models. For example, you go in and you write a prompt, “Generate an image of a child studying astronomy.” If all 100 images are of a boy studying astronomy, then you know there’s an issue. And these models do have these biases, just because the training data sets have that. But if I get an individual image, how do I know it’s OK or not? We’re doing some work on detecting bias, debiasing, on automated prompt engineering as well. So, you state what you want, and we’ll figure out how to structure the prompt for a machine learning model to get the kind of output you want. That’s a bit on the technical side.
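The kind of audit Hosanagar describes, checking whether all 100 generated images show a boy, can be sketched as a simple representation check. The labels and thresholds here are hypothetical: a real audit would first run a classifier over the model's outputs to obtain group labels.

```python
def representation_rates(labels):
    # Share of generated images per group.
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    total = len(labels)
    return {lab: c / total for lab, c in counts.items()}

def flag_imbalance(labels, expected=0.5, tolerance=0.1):
    # Flag any group whose share deviates from the expected rate
    # by more than the tolerance.
    rates = representation_rates(labels)
    return {lab: r for lab, r in rates.items() if abs(r - expected) > tolerance}

# 100 images for "a child studying astronomy": 92 read as boys, 8 as girls.
sample = ["boy"] * 92 + ["girl"] * 8
print(flag_imbalance(sample))  # {'boy': 0.92, 'girl': 0.08}
```

Checks like this only measure aggregate skew, which is why the harder problem Hosanagar raises remains: deciding whether any one individual image is acceptable.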

On the human and AI side, most of my interest is around two themes. One is human-AI collaboration. So, if you look at any workflow in any organization where AI now can touch that workflow, we do not understand today what is ideally done by humans and what is done by AI. In terms of organization design and process design, we understand historically, for example, how to structure teams, how to build team dynamics. But if the team is AI and humans, how do we structure that? What should be done by whom? I have some work going on there.

And the other one is around trust. AI has a huge trust problem today. We were just talking about the writers’ strike. There’s an actors’ strike, and many more issues coming up. So, what does it take to drive human trust and engagement with AI is another theme I’m looking at.

Bradlow: Maybe in the last few minutes or so, Stefano, can you tell us a little bit, and our listeners here on Sirius XM and on our podcast, about AI at Wharton and what you’re hoping to study and accomplish through a center on artificial intelligence here at Wharton? And then we’ll get Kartik’s thoughts as well.

Puntoni: Thank you for organizing this podcast, and Sirius for having us. I think it’s a great opportunity to get the word out. The initiative AI at Wharton is just starting out. We are a bunch of academics working on AI, tackling AI from different angles for the purpose of understanding what it can do for companies, how it can improve decision-making in companies. But also, what are the implications for all of us? As workers, as consumers, and society broadly?

We’re going to try initiatives around education, around research, around dissemination of research findings, and generally, try to create a community of people who are interested in these topics. They’re asking similar questions, maybe in very different ways, and can learn from one another.

Bradlow: And Kartik, what are your thoughts? You’ve been involved with lots of centers over the years. What makes AI at Wharton special, and why are you so excited to be in one of the leadership positions of it?

Hosanagar: Yeah. I think, first of all, to me, AI is maybe not even a once-a-generation, but a once-in-several-generations kind of technology. And it’s going to open up so many questions that will not be answered unless we create initiatives like ours. For example, today, computer scientists are focused on creating new and better models. But they’re focused on assessing these models somewhat narrowly, in terms of accuracy of the model, and so on, and not necessarily human impact, societal impact, some of these other questions.

At the same time, industry is affected by a lot of this. But they’re trying to put the fire out, and they’re focused on, what do they need to get done this week, next week? They’re very interested in the questions of, where will this take us three, four years later? But they have to focus quarter by quarter.

I think we are uniquely positioned, here at Wharton, in terms of having both the technical chops to understand those computer science models and what they’re doing, as well as people like Stefano and others who understand the psychological and the social science frameworks, who can bring in that perspective and really take a five, 10, 15, 25-year timeline on this and figure out, what does this mean for how organizations need to be redesigned? What does this mean in terms of how people need to be reskilled? How do our own college students need to be reskilled?

What does this mean for regulation? Because, man, regulators are going to struggle with this. And while the technology is moving exponentially, regulators are moving linearly. They will need that thought leadership as well. So, I think we fill that gap uniquely in terms of those kinds of problems. Big, open issues that are going to hit us in five, 10 years, but we are currently too busy putting out the fires to worry about the big avalanche coming our way.

Bradlow: Well, I think anybody that has listened to this episode will agree, artificial intelligence is here — which is what the title of this episode was. Again, I’m Eric Bradlow, professor of marketing and statistics here at the Wharton School, and vice dean of analytics. I’d like to thank my colleagues, Stefano Puntoni and Kartik Hosanagar. Thank you for joining us on this episode.

Hosanagar: Thank you, Eric.

Puntoni: Thank you.


NATURE INDEX
09 December 2020

Six researchers who are shaping the future of artificial intelligence

  • Gemma Conroy,
  • Hepeng Jia,
  • Benjamin Plackett &
  • Andy Tay

Andy Tay is a science writer in Singapore.

As artificial intelligence (AI) becomes ubiquitous in fields such as medicine, education and security, there are significant ethical and technical challenges to overcome.

CYNTHIA BREAZEAL: Personal touch

Illustrated portrait of Cynthia Breazeal

Credit: Taj Francis

While the credits to Star Wars drew to a close in a 1970s cinema, 10-year-old Cynthia Breazeal remained fixated on C-3PO, the anxious robot. “Typically, when you saw robots in science fiction, they were mindless, but in Star Wars they had rich personalities and could form friendships,” says Breazeal, associate director of the Massachusetts Institute of Technology (MIT) Media Lab in Cambridge, Massachusetts. “I assumed these robots would never exist in my lifetime.”

A pioneer of social robotics and human–robot interaction, Breazeal has made a career of conceptualizing and building robots with personality. As a master’s student at MIT’s Humanoid Robotics Group, she created her first robot, an insectile machine named Hannibal that was designed for autonomous planetary exploration and funded by NASA.

Some of the best-known robots Breazeal developed as a young researcher include Kismet, one of the first robots that could demonstrate social and emotional interactions with humans; Cog, a humanoid robot that could track faces and grasp objects; and Leonardo, described by the Institute of Electrical and Electronics Engineers in New Jersey as “one of the most sophisticated social robots ever built”.


In 2014, Breazeal founded Jibo, a Boston-based company that launched her first consumer product, a household robot companion, also called Jibo. The company raised more than US$70 million and sold more than 6,000 units. In May 2020, NTT Disruption, a subsidiary of the London-based telecommunications company NTT, bought the Jibo technology, and plans to explore the robot’s applications in health care and education.

Breazeal returned to academia full time this year as director of the MIT Personal Robots Group. She is investigating whether robots such as Jibo can help to improve students’ mental health and wellbeing by providing companionship. In a preprint published in July, which has yet to be peer-reviewed, Breazeal’s team reports that daily interactions with Jibo significantly improved the mood of university students (S. Jeong et al. Preprint at https://arxiv.org/abs/2009.03829; 2020). “It’s about finding ways to use robots to help support people,” she says.

In April 2020, Breazeal launched AI Education, a free online resource that teaches children how to design and use AI responsibly. “Our hope is to turn the hundreds of students we’ve started with into tens of thousands in a couple of years,” says Breazeal. — by Benjamin Plackett

CHEN HAO: Big picture

Illustrated portrait of Chen Hao

Analysing medical images is an intensive and technical task, and there is a shortage of pathologists and radiologists to meet demands. In a 2018 survey by the UK’s Royal College of Pathologists, just 3% of the National Health Service histopathology departments (which study diseases in tissues) said they had enough staff. A June 2020 report published by the Association of American Medical Colleges found that the United States’ shortage of physician specialists could climb to nearly 42,000 by 2033.

AI systems that can automate part of the process of medical imaging analysis could be the key to easing the burden on specialists. They can cut tasks that usually take hours or days down to seconds, says Chen Hao, founder of Imsight, an AI medical imaging start-up based in Shenzhen, China.

Launched in 2017, Imsight’s products include Lung-Sight, which can automatically detect and locate signs of disease in CT scans, and Breast-Sight, which identifies and measures the metastatic area in a tissue sample. “The analysis allows doctors to make a quick decision based on all of the information available,” says Chen.

Since the outbreak of COVID-19, two of Shenzhen’s largest hospitals have been using Imsight’s imaging technology to analyse subtle changes in patients’ lungs caused by treatment, which enables doctors to identify cases with severe side effects.

In 2019, Chen received the Young Scientist Impact Award from the Medical Image Computing and Computer-Assisted Intervention Society, a non-profit organization in Rochester, Minnesota. The award recognized a paper he led that proposed using a neural network to process fetal ultrasound images (H. Chen et al. in Medical Image Computing and Computer-Assisted Intervention — MICCAI 2015 (eds N. Navab et al.) 507–514; Springer, 2015). The technique, which has since been adopted in clinical practice in China, reduces the workload of the sonographer.

Despite the rapid advancement of AI’s role in health care, Chen rejects the idea that doctors can be easily replaced. “AI will not replace doctors,” he says. “But doctors who are better able to utilize AI will replace doctors who cannot.” — by Hepeng Jia

ANNA SCAIFE: Star sifting

Illustrated portrait of Anna Scaife

When construction of the Square Kilometre Array (SKA) is complete, it will be the world’s largest radio telescope. With roughly 200 radio dishes in South Africa and 130,000 antennas in Australia expected to be installed by the 2030s, it will produce an enormous amount of raw data, more than current systems can efficiently transmit and process.

Anna Scaife, professor of radio astronomy at the University of Manchester, UK, is building an AI system to automate radio astronomy data processing. Her aim is to reduce manual identification, classification and cataloguing of signals from astronomical objects such as radio galaxies, active galaxies that emit more light at radio wavelengths than at visible wavelengths.

In 2019, Scaife was the recipient of the Jackson-Gwilt Medal, one of the highest honours bestowed by the UK Royal Astronomical Society (RAS). The RAS recognized a study led by Scaife, which outlined data calibration models for Europe’s Low Frequency Array (LOFAR) telescope, the largest radio telescope operating at the lowest frequencies that can be observed from Earth (A. M. M. Scaife and G. H. Heald Mon. Not. R. Astron. Soc. 423, L30–L34; 2012). The techniques in Scaife’s paper underpin most low-frequency radio observations today.

“It’s a very peculiar feeling to win an RAS medal,” says Scaife. “It’s a mixture of excitement and disbelief, especially because you don’t even know that you were being considered, so you don’t have any opportunity to prepare yourself. Suddenly, your name is on a list that commemorates more than 100 years of astronomy history, and you’ve just got to deal with that.”

Scaife is the academic co-director of Policy@Manchester, the University of Manchester’s policy engagement institute, where she helps researchers to better communicate their findings to policymakers. She also runs a data science training network that involves South African and UK partner universities, with the aim to build a team of researchers to work with the SKA once it comes online. “I hope that the training programmes I have developed can equip young people with skills for the data science sector,” says Scaife. — by Andy Tay

TIMNIT GEBRU: Algorithmic bias

Illustrated portrait of Timnit Gebru

Computer vision is one of the most rapidly developing areas of AI. Algorithms trained to read and interpret images are the foundation of technologies such as self-driving cars, surveillance and augmented reality.

Timnit Gebru, a computer scientist and former co-lead of the Ethical AI Team at Google in Mountain View, California, recognizes the promise of such advances, but is concerned about how they could affect underrepresented communities, particularly people of colour. “My research is about trying to minimize and mitigate the negative impacts of AI,” she says.

In a 2018 study, Gebru and Joy Buolamwini, a computer scientist at the MIT Media Lab, concluded that three commonly used facial analysis algorithms drew overwhelmingly on data obtained from light-skinned people (J. Buolamwini and T. Gebru, Proc. Mach. Learn. Res. 81, 77–91; 2018). Error rates for dark-skinned females were found to be as high as 34.7%, due to a lack of data, whereas the maximum error rate for light-skinned males was 0.8%. This could result in people with darker skin getting inaccurate medical diagnoses, says Gebru. “If you’re using this technology to detect melanoma from skin photos, for example, then a lot of dark-skinned people could be misdiagnosed.”
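The core of such an audit is simple: compute the error rate separately for each demographic subgroup rather than reporting one aggregate figure. A minimal sketch in Python, using invented records purely for illustration (the function name and data are hypothetical, not taken from the Gender Shades study):

```python
# Minimal sketch of a disaggregated error-rate audit: per-subgroup
# misclassification rates instead of one aggregate number.
# All records below are hypothetical, for illustration only.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, true_label, predicted_label)."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        if truth != pred:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

records = [
    ("darker-skinned female", "F", "M"),
    ("darker-skinned female", "F", "F"),
    ("darker-skinned female", "F", "M"),
    ("lighter-skinned male", "M", "M"),
    ("lighter-skinned male", "M", "M"),
    ("lighter-skinned male", "M", "M"),
]
rates = error_rates_by_group(records)
# The aggregate error rate here (2/6) hides that one subgroup sees a
# 2/3 error rate while the other sees none.
```

The point of the disaggregation is exactly the one Gebru and Buolamwini make: a system can look accurate on average while failing badly on a specific group.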

Facial recognition used for government surveillance, such as during the Hong Kong protests in 2019, is also highly problematic, says Gebru, because the technology is more likely to misidentify a person with darker skin. “I’m working to have face surveillance banned,” she says. “Even if dark-skinned people were accurately identified, it’s the most marginalized groups that are most subject to surveillance.”

In 2017, as a PhD student at Stanford University in California under the supervision of Li Fei-Fei, Gebru co-founded the non-profit Black in AI with Rediet Abebe, a computer scientist at Cornell University in Ithaca, New York. The organization seeks to increase the presence of Black people in AI research by providing mentorship for researchers as they apply to graduate programmes, navigate graduate school, and enter and progress through the postgraduate job market. The organization is also advocating for structural changes within institutions to address bias in hiring and promotion decisions. Its annual workshop calls for papers with at least one Black researcher as the main author or co-author. — by Benjamin Plackett

YUTAKA MATSUO: Internet miner

Illustrated portrait of Yutaka Matsuo

In 2010, Yutaka Matsuo created an algorithm that could detect the first signs of earthquakes by monitoring Twitter for mentions of tremors. His system not only detected 96% of the earthquakes that were registered by the Japan Meteorological Agency (JMA), but also sent e-mail alerts to registered users much faster than announcements could be broadcast by the JMA.

He applied a similar web-mining technique to the stock market. “We were able to classify news articles about companies as either positive or negative,” says Matsuo. “We combined that data to accurately predict profit growth and performance.”
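A toy illustration of this kind of text classification, assuming nothing about Matsuo's actual models: a lexicon-based scorer that labels a headline positive or negative from keyword counts. The word lists here are invented for illustration; production systems use learned classifiers rather than hand-written lexicons.

```python
# Toy lexicon-based headline classifier, in the spirit of the
# web-mining approach described above. Word lists are invented.
POSITIVE = {"profit", "growth", "record", "beat", "expand"}
NEGATIVE = {"loss", "decline", "lawsuit", "recall", "miss"}

def classify_headline(headline):
    """Label a headline by counting positive vs negative keywords."""
    words = headline.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Aggregating such labels over many articles about a company is what allows sentiment signals to be combined with other data, as in the profit-prediction work Matsuo describes.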

Matsuo’s ability to extract valuable information from what people are saying online has contributed to his reputation as one of Japan’s leading AI researchers. He is a professor at the University of Tokyo’s Department of Technology Management and president of the Japan Deep Learning Association, a non-profit organization that fosters AI researchers and engineers by offering training and certification exams. In 2019, he was the first AI specialist added to the board of Japanese technology giant Softbank.

Over the past decade, Matsuo and his team have been supporting young entrepreneurs in launching internationally successful AI start-ups. “We want to create an ecosystem like Silicon Valley, which Japan just doesn’t have,” he says.

Among the start-ups supported by Matsuo is Neural Pocket, launched in 2018 by Roi Shigematsu, a University of Tokyo graduate. The company analyses photos and videos to provide insights into consumer behaviour.

Matsuo is also an adviser for ReadyFor, one of Japan’s earliest crowd-funding platforms. The company was launched in 2011 by Haruka Mera, who first collaborated with Matsuo as an undergraduate student at Keio University in Tokyo. The platform is raising funds for people affected by the COVID-19 pandemic, and reports that its total transaction value for donations rose by 4,400% between March and April 2020.

Matsuo encourages young researchers who are interested in launching AI start-ups to seek partnerships with industry. “Japanese society is quite conservative,” he says. “If you’re older, you’re more likely to get a large budget from public funds, but I’m 45, and that’s still considered too young.” — by Benjamin Plackett

DACHENG TAO: Machine visionary

Illustrated portrait of Dacheng Tao

By 2030, an estimated one in ten cars globally will be self-driving. The key to getting these autonomous vehicles on the road is designing computer-vision systems that can identify obstacles to avoid accidents at least as effectively as a human driver.

Neural networks, sets of AI algorithms inspired by the firing of neurons in the human cerebral cortex, form the ‘brains’ of self-driving cars. Dacheng Tao, a computer scientist at the University of Sydney, Australia, designs neural networks for computer-vision tasks. He is also building models and algorithms that can process videos captured by moving cameras, such as those in self-driving cars.

“Neural networks are very useful for modelling the world,” says Tao, director of the UBTECH Sydney Artificial Intelligence Centre, a partnership between the University of Sydney and global robotics company UBTECH.

In 2017, Tao was awarded an Australian Laureate Fellowship for a five-year project that uses deep-learning techniques to improve moving-camera computer vision in autonomous machines and vehicles. A subset of machine learning, deep learning uses neural networks to build systems that can ‘learn’ through their own data processing.
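As a minimal sketch of what ‘learning through its own data processing’ means, a single artificial neuron can be trained by gradient descent to reproduce the logical OR function. Deep networks stack millions of such units into layers, but the update rule below is the same in spirit. This toy example is illustrative only, not a description of Tao's methods.

```python
# A single artificial neuron trained by gradient descent to learn
# logical OR. Deep learning scales this idea up to many layers.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for _ in range(2000):
    for (x1, x2), target in data:
        out = sigmoid(w1 * x1 + w2 * x2 + b)
        grad = out - target          # gradient of cross-entropy loss
        w1 -= 0.5 * grad * x1        # gradient-descent weight updates
        w2 -= 0.5 * grad * x2
        b  -= 0.5 * grad

predictions = [round(sigmoid(w1 * x1 + w2 * x2 + b)) for (x1, x2), _ in data]
```

After training, the neuron's rounded outputs match the OR truth table; no rule was programmed in, only examples and an error signal.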

Since launching in 2018, Tao’s project has resulted in more than 40 journal publications and conference papers. He was among the most prolific contributors to AI research output from 2015 to 2019, as tracked by the Dimensions database, and is one of Australia’s most highly cited computer scientists. Since 2015, Tao’s papers have amassed more than 42,500 citations, as indexed by Google Scholar. In November 2020, he won the Eureka Prize for Excellence in Data Science, awarded by the Australian Museum.

In 2019, Tao and his team trained a neural network to construct 3D environments using a motion-blurred image, such as would be captured by a moving car. Details, including the motion, blurring effect and depth at which it was taken, helped the researchers to recover what they describe as “the 3D world hidden under the blurs”. The findings could help self-driving cars to better process their surroundings. — by Gemma Conroy

Nature 588 , S114-S117 (2020)

doi: https://doi.org/10.1038/d41586-020-03411-0

This article is part of Nature Index 2020 Artificial intelligence , an editorially independent supplement. Advertisers have no influence over the content.

What’s the future of AI?

May 5, 2024 We’re in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. To outcompete in the future, organizations and individuals alike need to get familiar fast. We don’t know exactly what the future will look like. But we do know that these seven technologies will play a big role. This series of McKinsey Explainers, which draws on insights from articles by McKinsey’s Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and others, dives deep into the seven technologies that are already shaping the years to come.


Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it is in all of our interests to become informed and engaged.

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming . If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, those transformations played out over the course of generations. What is different now is how rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This is no longer the case for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But, it is plausible that it is both the stuff of sci-fi fantasy and the central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is the failure to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Russell and Norvig put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The range of domains in which machines already outperform humans is steadily expanding: in chess, after matching the level of the best human players in the late 1990s, AI systems reached superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from machines, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand, they can fail in ways that no human would. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

AI-generated image of a horse 9

A brown horse running in a grassy field. The horse appears to have five legs.

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future .

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

Timeline of the three transformative events in world history


A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article is not enough space to address all possible questions. Especially on the very worst risks of AI systems, and what we can do now to reduce them, I recommend reading the book The Alignment Problem by Brian Christian and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’ .

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year was more than 2,000 times larger, totaling $153 billion.
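A quick back-of-the-envelope check of that ratio, taking even the upper end of Ord's estimate:

```python
# Ratio of 2020 corporate AI investment to estimated alignment spending.
corporate_investment = 153e9    # $153 billion
alignment_spend_high = 50e6     # upper end of the $10-50 million estimate
alignment_spend_low = 10e6      # lower end

ratio_conservative = corporate_investment / alignment_spend_high  # 3,060
ratio_upper = corporate_investment / alignment_spend_low          # 15,300
```

Even the conservative figure comfortably exceeds the "more than 2,000 times" stated above.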

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that will determine how one of the – or plausibly the – most powerful technology in human history will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future — the future of humanity — will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments to drafts of this essay.

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic .

The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence , for example, lists a number of definitions from various researchers and different disciplines). As a consequence there are also various definitions of ‘human-level AI’.

There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes synonymously used, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

Stuart Russell and Peter Norvig (2021) — Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

The AI system AlphaGo , and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. In Science 0, no. 0 (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097 .

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) –  Measuring Massive Multitask Language Understanding or the definition of what would qualify as artificial general intelligence in this Metaculus prediction .

An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail . It is also worth reading through the AIAAIC Repository which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation."

I have taken this example from AI researcher François Chollet , who published it here .

Via François Chollet , who published it here . Based on Chollet’s comments it seems that this image was created by the AI system ‘Stable Diffusion’.

This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand . For Holden Karnofsky’s earlier thinking on this conceptualization of AI see his 2016 article ‘Some Background on Our Views Regarding Advanced Artificial Intelligence’ .

Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. In her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

On the use of AI in politically-motivated disinformation campaigns see for example John Villasenor (November 2020) – How to deal with AI-enabled disinformation . More generally on this topic see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com . A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry .

See for example the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’ , in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute .

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and Brian Christian’s 2020 book The Alignment Problem . Christian presents the thinking of many leading AI researchers from the earliest days up to now and presents an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms who work towards powerful AI – see OpenAI's article " Our approach to alignment research " from August 2022.

Stuart Russell (2019) – Human Compatible

A question that follows from this is, why build such a powerful AI in the first place?

The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits, there are also large incentives for those who develop it: governments can use it for their goals, and individuals can use it to become more powerful and wealthy. It is also of scientific interest and might help us understand our own minds and intelligence better. And lastly, even if we wanted to stop building more powerful AI, it would likely be very hard to achieve: countries around the world would have to agree to stop and then find ways to actually implement that agreement.

In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan. M. Turing (1950) – Computing Machinery and Intelligence , In Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.

In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

Toby Ord – The Precipice . He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.



What is AI and how will it change our lives? NPR Explains.

Danny Hajek

Bobby Allyn

Ashley Montgomery

AI is a multi-billion dollar industry. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. (Image: Yuichiro Chino)

Artificial intelligence is changing our lives – from education and politics to art and healthcare. The AI industry continues to develop at a rapid pace. But what exactly is it? Should we be optimistic or worried about our future with this ever-evolving technology? Join host and tech reporter Bobby Allyn in NPR Explains: AI, a podcast series exclusively on the NPR App, which is available on the App Store or Google Play .

NPR Explains: AI answers your most pressing questions about artificial intelligence:

  • What is AI? - Artificial intelligence is a multi-billion dollar industry. Tons of AI tools are suddenly available to the public. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. AI scientist Gary Marcus says there is no one definition of artificial intelligence. It's about building machines that do smart things. Listen here.
  • Can AI be regulated? - As technology gets better at faking reality, there are big questions about regulation. In the U.S., Congress has never been bold about regulating the tech industry and it's no different with the advancements in AI. Listen here.
  • Can AI replace creativity? - AI tools used to generate artwork can give users the chance to create stunning images. Language tools can generate poetry through algorithms. AI is blurring the lines of what it means to be an artist. Now, some artists are arguing that these AI models breach copyright law. Listen here.
  • Does AI have common sense? - Earlier this year, Microsoft's chatbot went rogue. It professed love to some users. It called people ugly. It spread false information. The chatbot's strange behavior brought up an interesting question: Does AI have common sense? Listen here.
  • How can AI help productivity? - From hiring practices to medical insurance paperwork, many big businesses are using AI to work faster and more efficiently. But that's raising urgent questions about discrimination and equity in the workplace. Listen here.
  • What are the dangers of AI? - Geoffrey Hinton, known as the "godfather of AI," spent decades advancing artificial intelligence. Now he says he believes the AI arms race among tech giants is actually a race towards danger. Listen here.

Learn more about artificial intelligence. Listen to NPR Explains: AI, a podcast series available exclusively in the NPR app. Download it on the App Store or Google Play .

Artificial Intelligence: History, Challenges, and Future Essay

In the editorial “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence” by Michael Haenlein and Andreas Kaplan, the authors explore the history of artificial intelligence (AI), the current challenges firms face, and the future of AI. The authors classify AI into analytical, human-inspired, humanized AI, and artificial narrow, general, and superintelligent AI. They address the AI effect, which is the phenomenon in which observers disregard AI behavior by claiming that it does not represent true intelligence. The article also uses the analogy of the four seasons (spring, summer, fall, and winter) to describe the history of AI.

The article provides a useful overview of the history of AI and its current state, and offers a helpful framework for understanding AI by dividing it into categories based on the types of intelligence it exhibits or its evolutionary stage.

The central claim made by Michael Haenlein and Andreas Kaplan is that AI can be classified into different types based on the kinds of intelligence it exhibits or its evolutionary stage. The authors argue that AI has evolved significantly since its birth in the 1940s, but that the field has also seen ups and downs (Haenlein and Kaplan). The evidence used to support this claim is the historical overview of AI. The authors also discuss the challenges firms face today and the future of AI. They qualify their claims by acknowledging that only time will tell whether AI will reach Artificial General Intelligence, and that early systems, such as expert systems, had limitations. If one takes their claims to be true, it suggests that AI has the potential to transform various industries, but that there may also be ethical and social implications to consider. Overall, the argument is well supported with evidence, the authors acknowledge the limitations of AI, and the article offers an informative overview of the history and potential of AI.

The article can be beneficial for the research on the ethical and social implications of AI in society. It offers a historical overview of AI, and this can help me understand how AI has evolved and what developments have occurred in the field. Additionally, the article highlights the potential of AI and the challenges that firms face today, and this can help me understand the practical implications of AI. The authors also classify AI into three categories, and this can help me understand the types of AI that exist and how they can be used in different contexts.

The article raises several questions that I would like to explore further, such as the impact of AI on the workforce and job displacement. The article also provides a new framework for looking at AI, and this can help me understand the potential of AI and its implications for society. However, I do not disagree with the author’s ideas, and I do not see myself working against the ideas presented.

Personally, I find the topic of AI fascinating, and I believe that it has the potential to transform society in numerous ways. However, I also believe that we need to approach AI with caution and be mindful of its potential negative impacts. As the editorial suggests, we need to develop clear AI strategies and ensure that ethical considerations are taken into account. In this way, we can guarantee that the benefits of AI are maximized while minimizing its negative impacts.

Haenlein, Michael, and Andreas Kaplan. “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence.” California Management Review, vol. 61, no. 4, 2019, pp. 5–14.

IvyPanda. (2024, February 25). Artificial Intelligence: History, Challenges, and Future. https://ivypanda.com/essays/artificial-intelligence-history-challenges-and-future/



AI: the future of humanity

  • Open access
  • Published: 26 March 2024
  • Volume 4, article number 25 (2024)

Soha Rawas

Artificial intelligence (AI) is reshaping humanity's future, and this manuscript provides a comprehensive exploration of its implications, applications, challenges, and opportunities. The revolutionary potential of AI is investigated across numerous sectors, with a focus on addressing global concerns. The influence of AI on areas such as healthcare, transportation, banking, and education is revealed through historical insights and conversations on different AI systems. Ethical considerations and the significance of responsible AI development are addressed. Furthermore, this study investigates AI's involvement in addressing global issues such as climate change, public health, and social justice. This paper serves as a resource for policymakers, researchers, and practitioners understanding the complex link between AI and humans.


1 Introduction

Artificial intelligence (AI) is at the cutting edge of technological development and has the potential to profoundly and incomparably influence humankind's future [ 1 ]. Understanding the consequences of AI is increasingly important as it develops and permeates more facets of society. The goal of this paper is to provide a comprehensive exploration of AI's transformative potential, applications, ethical considerations, challenges, and opportunities.

AI's rapid recent advances have deep historical roots. From its early beginnings in the 1950s to the present, the field has passed through important turning points and discoveries that have fueled its development [ 2 ]. These developments have accelerated progress toward artificial intelligence on par with human intelligence, opening up new avenues for exploration.

AI comprises a wide range of techniques and technologies, including computer vision, deep learning, machine learning, and symbolic AI [ 3 ]. These technologies provide machines the ability to think like humans do by enabling them to perceive, analyze, learn, and make decisions. Understanding the intricacies of these AI systems and their underlying algorithms is essential to appreciate the immense potential they hold.

AI has a wide range of transformational applications that affect practically every aspect of our lives. In healthcare, AI is revolutionizing medical diagnostics, enabling personalized treatments, and assisting in complex surgical procedures [ 4 ]. The transportation sector is witnessing the emergence of autonomous vehicles and intelligent traffic management systems, promising safer and more efficient mobility [ 5 ]. In finance and economics, AI is reshaping algorithmic trading, fraud detection, and economic forecasting, altering the dynamics of global markets [ 6 ]. Moreover, AI is transforming education by offering personalized learning experiences and intelligent tutoring systems, fostering individual growth and enhancing educational outcomes [ 7 ].

However, as AI proliferates, it brings with it ethical and societal implications that warrant careful examination. Concerns about job displacement and the future of work arise as automation and AI technologies increasingly replace human labor. Privacy and data security become paramount as AI relies on vast amounts of personal information. Issues of bias and fairness emerge as AI decision-making algorithms can inadvertently perpetuate discriminatory practices. Moreover, the impact of AI on human autonomy raises profound questions about the boundaries between human agency and technological influence [ 8 ].

The challenges and risks associated with AI should not be overlooked. The notion of superintelligence and its potential existential risks demand rigorous evaluation and proactive measures. Transparency and accountability in AI systems are imperative to ensure trust and prevent unintended consequences [ 9 ]. Addressing societal disparities, such as unemployment and socioeconomic inequalities exacerbated by AI, requires careful consideration and policy interventions [ 10 ]. Regulation and governance frameworks must be developed to guide the responsible development and deployment of AI technologies.

Despite these challenges, AI has tremendous potential for the future [ 11 ]. Collaboration between AI and human intelligence has the potential to lead to extraordinary improvements in human skills and the resolution of complicated issues. AI augmentation, in which humans and machines collaborate, has potential in a variety of fields, ranging from healthcare to scientific study. Explainable AI advancements promote transparency and trust, allowing for improved understanding and ethical decision-making. In addition, ethical principles and rules for AI research and governance serve as a road map for responsible AI practices.

The purpose of this article is to provide a thorough grasp of AI's revolutionary potential for humanity. We dive into the complicated interplay between AI and society by investigating its applications, ethical considerations, challenges, and opportunities. Through careful analysis and forward-thinking, we can leverage the power of AI to shape a future that is equitable, inclusive, and beneficial for all.

2 Methodology

2.1 Research gap

Despite the burgeoning literature on the societal implications of AI, a comprehensive investigation into the intricate interplay between AI's multifaceted impacts and the development of effective strategies to harness its potential remains relatively underexplored. While existing research delves into individual aspects of AI's influence, a holistic understanding of its far-reaching consequences and the actionable steps required for its responsible integration demands further exploration.

2.2 Study objectives

This study aims to address the aforementioned research gap by pursuing the following objectives:

Comprehensive impact assessment: To analyze and evaluate the multidimensional impact of artificial intelligence across diverse sectors, including healthcare, transportation, finance, and education. This involves investigating how AI applications are transforming industries and shaping societal dynamics.

Ethical and societal considerations: To critically examine the ethical and societal implications stemming from AI's proliferation, encompassing areas such as job displacement, privacy concerns, bias mitigation, and the delicate balance between human autonomy and technological influence.

Challenges and opportunities: To identify and elucidate the challenges and opportunities that accompany the widespread integration of AI technologies. This involves exploring potential risks and benefits, as well as the regulatory and governance frameworks required for ensuring responsible AI development.

Societal, economic, and entrepreneurial impact: To delve into the broader impact of AI on society, economy, and entrepreneurship, and to provide a thorough discussion and argument on the ways AI is shaping these domains. This includes considering how AI is altering business models, employment dynamics, economic growth, and innovative entrepreneurship.

Empirical exploration: To conduct a rigorous empirical exploration through data analysis, drawing from a comprehensive collection of relevant and reputable sources. This includes scholarly articles, reports, and established online platforms to establish a solid theoretical foundation.

By systematically addressing these objectives, this study seeks to shed light on the intricate relationship between artificial intelligence and its societal, ethical, and economic implications, providing valuable insights for policymakers, researchers, and practitioners alike.

3 Historical overview of Artificial Intelligence

3.1 Origins of AI and its early development

Artificial intelligence can be traced back to the early ambitions of researchers and scientists who wanted to understand and duplicate human intellect in machines. The core concepts of AI were laid out during the Dartmouth Conference in 1956, when John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon coined the term "Artificial Intelligence" and outlined the goal of building machines that could simulate human intelligence [ 12 ]. The early development of AI focused on symbolic AI, which employs logical principles and symbolic representations to mimic human reasoning and problem-solving. Early AI systems, such as the Logic Theorist and the General Problem Solver, demonstrated the ability of machines to solve mathematical and logical problems. However, progress in AI was hampered by the limited computing power of the time and the difficulty of encoding comprehensive human knowledge.

3.2 Key milestones in AI research and technological advancements

Over the decades, the field of AI has seen significant milestones and technological achievements [ 8 , 9 , 12 , 13 ]. AI researchers made significant advances in natural language processing and knowledge representation in the 1960s and 1970s, establishing the framework for language-based AI systems. These improvements led to the development of expert systems in the 1980s, which used rule-based algorithms to make decisions in specific domains. Expert systems found use in medical diagnosis, financial analysis, and industrial process control. IBM's Deep Blue defeated world chess champion Garry Kasparov in 1997, marking a watershed moment in AI's ability to outperform human professionals in strategic thinking. This accomplishment demonstrated the effectiveness of brute-force computing and advanced algorithms in handling challenging tasks.

With the advent of machine learning and neural networks in the twenty-first century, AI research saw a paradigm shift. The availability of large datasets and computing resources facilitated neural network training, resulting in advancements in domains such as speech recognition, image classification, and natural language understanding. Deep learning, a subfield of machine learning, transformed AI by allowing systems to build hierarchical representations from data, loosely mirroring how the human brain processes information. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have sped up advances in computer vision and natural language processing. These advancements fueled the development of intelligent virtual assistants like Siri and Alexa, and enabled AI systems to match or outperform humans on some image recognition and language translation tasks.

3.3 Evolution of AI technologies and their impact on society

The advancement of AI technology has had a significant impact on a variety of societal areas. Automation powered by AI has revolutionized industries, streamlining processes and increasing efficiency. In manufacturing, robots and AI-powered systems have revolutionized assembly lines and enabled mass customization [ 3 ]. AI's presence in the healthcare sector has resulted in improved diagnostic accuracy, personalized treatment plans, and drug discovery. AI algorithms are now capable of detecting medical conditions from medical images with greater precision than human experts [ 2 ].

In finance and economics [ 6 ], AI-driven algorithms have revolutionized trading strategies, risk assessment, and fraud detection, influencing the dynamics of global markets. AI-powered recommendation systems have reshaped the entertainment and e-commerce industries, providing personalized content and product suggestions to consumers. The transportation sector is on the cusp of a revolution, with AI paving the way for self-driving vehicles, optimizing traffic management, and enabling intelligent transportation systems [ 5 ].

Despite its remarkable advancements, AI's expanding influence raises ethical, legal, and societal challenges. Concerns surrounding job displacement and the future of work have sparked discussions about reskilling the workforce and creating new job opportunities that complement AI-driven technologies. Ethical considerations around data privacy, transparency, and fairness in AI decision-making have become critical issues, prompting the need for robust regulations and ethical guidelines [ 9 ].

The responsible deployment of AI in critical domains, such as healthcare and autonomous vehicles, demands stringent safety measures and accountability to avoid potential harm to human lives. Additionally, addressing the issue of bias in AI algorithms is imperative to ensure equitable outcomes and promote societal trust [ 10 ].

Accordingly, the historical overview of AI reveals a fascinating journey of innovation, breakthroughs, and paradigm shifts. From its inception as a concept to the current era of deep learning and neural networks, AI has made remarkable strides, impacting various sectors and aspects of society. Understanding the historical context and technological advancements of AI is crucial in comprehending its present significance and envisioning its transformative potential for the future of humanity. Nonetheless, responsible development, ethical considerations, and collaboration between stakeholders will be essential in harnessing AI's power to benefit humanity while addressing its challenges.

4 Understanding Artificial Intelligence

4.1 Definition and scope of AI

AI is a multidisciplinary field that seeks to develop intelligent agents capable of performing tasks that would normally require human intelligence [ 12 ]. Reasoning, problem-solving, learning, perception, and language comprehension are examples of these tasks. AI aims to mimic human cognitive abilities by allowing machines to interpret data, make decisions, and adapt to new settings. AI techniques span a wide range, from simple rule-based systems to powerful deep learning algorithms. While AI has made significant strides in various domains, achieving human-level intelligence, often referred to as Artificial General Intelligence (AGI), remains a formidable challenge.

4.2 Different types of AI systems

AI systems can be categorized into different types based on their approaches and methodologies. Symbolic AI [ 14 ], also known as rule-based AI, relies on predefined rules and logical reasoning to solve problems. Expert systems [ 15 ], which fall under symbolic AI, use a knowledge base and an inference engine to mimic the decision-making of human experts in specific domains. Another key category is machine learning [ 16 ], which enables AI systems to learn from data and improve their performance over time without explicit programming. Machine learning includes supervised learning, where the algorithm is trained on labeled data; unsupervised learning, where the algorithm learns patterns and structures from unlabeled data; and reinforcement learning, where the algorithm learns by interacting with an environment and receiving feedback in the form of rewards or penalties. Deep learning, a subset of machine learning, employs artificial neural networks with multiple layers to automatically learn hierarchical representations of data, leading to breakthroughs in computer vision, speech recognition, and natural language processing.
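To make the supervised-learning idea above concrete, here is a minimal sketch of a 1-nearest-neighbor classifier in pure Python: it "learns" simply by memorizing labeled examples and then predicts the label of the closest known point. The data points and labels below are invented purely for illustration.

```python
import math

def predict(train, point):
    """Return the label of the training example nearest to `point`."""
    nearest = min(train, key=lambda ex: math.dist(ex[0], point))
    return nearest[1]

# Labelled training data: (features, label) pairs, invented for the example.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
         ((5.0, 5.0), "dog"), ((4.8, 5.2), "dog")]

print(predict(train, (1.1, 1.0)))  # a point near the "cat" cluster
print(predict(train, (5.1, 4.9)))  # a point near the "dog" cluster
```

Even this toy captures the defining feature of supervised learning: behavior comes from labeled data rather than from hand-written rules.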

4.3 Fundamental concepts in AI

Neural Networks: Neural networks are computational models inspired by the structure and functioning of the human brain [ 17 ]. They consist of interconnected nodes, called neurons, organized in layers. Each neuron processes incoming data and applies an activation function to produce an output. Deep neural networks with many layers have revolutionized AI by enabling complex feature extraction and high-level abstractions from data.
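As a minimal sketch of what the description above looks like in code, the following pure-Python snippet runs a forward pass through a tiny network: two inputs, one hidden layer of two sigmoid neurons, and a single sigmoid output. The weights and biases are arbitrary numbers chosen for illustration, not trained values.

```python
import math

def sigmoid(x):
    """Standard logistic activation, squashing any real number into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias, passed through the activation.
    return sigmoid(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(x):
    # Hidden layer: two neurons, each seeing both inputs.
    hidden = [neuron(x, [0.5, -0.6], 0.1),
              neuron(x, [-0.3, 0.8], 0.0)]
    # Output layer: one neuron combining the hidden activations.
    return neuron(hidden, [1.0, -1.0], 0.2)

out = forward([1.0, 0.5])
print(round(out, 3))
```

Training would consist of adjusting the weights and biases to reduce the error between `forward(x)` and the desired outputs; deep networks simply stack many more such layers.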

Algorithms: AI algorithms govern the learning and decision-making processes of AI systems. These algorithms can be as simple as linear regression or as complex as convolutional neural networks [ 14 ]. The choice of algorithms is crucial in determining the performance and efficiency of AI applications.
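For instance, ordinary least-squares linear regression, the "simple" end of the algorithmic spectrum mentioned above, fits a line y = a·x + b in a few lines of code. The data here are synthetic and chosen to lie exactly on y = 2x + 1.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = a*x + b; returns the slope a and intercept b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    a = num / den
    return a, mean_y - a * mean_x

a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data lie exactly on y = 2x + 1
print(a, b)
```

A convolutional network solves a vastly harder problem, but the contract is the same: choose parameters that minimize error on the training data.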

Natural language processing (NLP): NLP enables AI systems to interact and understand human language [ 18 ]. NLP applications range from sentiment analysis and language translation to chatbots and virtual assistants. Advanced NLP models utilize deep learning techniques, such as Transformers, to process contextual information and improve language understanding.
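As a deliberately simplified sketch of the sentiment-analysis task mentioned above, the snippet below scores a sentence by counting words from hand-picked positive and negative lists. Production NLP systems learn such associations from data (e.g. with Transformer models) rather than using fixed word lists; the lists and sentences here are invented for illustration.

```python
# Hypothetical word lists for the toy example; real systems learn these.
POSITIVE = {"good", "great", "excellent", "love"}
NEGATIVE = {"bad", "poor", "terrible", "hate"}

def sentiment(text):
    """Positive-minus-negative word count; >0 positive, <0 negative."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("I love this great product"))    # positive score
print(sentiment("terrible and bad experience"))  # negative score
```

The gap between this toy and a modern language model (handling negation, sarcasm, and context) is exactly why deep contextual models such as Transformers displaced word-counting approaches.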

4.4 Ethical considerations in AI development and deployment

The rapid advancement of AI raises ethical challenges that require careful consideration. One prominent concern is bias in AI algorithms [ 10 ], which can lead to unfair or discriminatory outcomes, especially in domains like hiring and criminal justice. Ensuring transparency and explainability in AI decision-making is essential to build trust and accountability. Privacy and data security are paramount, as AI systems often require large amounts of data to function effectively. Safeguarding personal information and preventing data breaches are critical aspects of responsible AI deployment. Additionally, the potential impact of AI on employment and societal dynamics necessitates thoughtful planning and policies to ensure a smooth transition and address potential workforce displacement.

Understanding Artificial Intelligence is fundamental to appreciating its vast potential and grappling with the ethical challenges it poses. AI's definition and scope encompass a wide range of tasks, from reasoning to language understanding. Different types of AI systems, such as symbolic AI, machine learning, and deep learning, provide diverse approaches to problem-solving and learning. Essential concepts in AI, like neural networks and algorithms, underpin its functionality and enable groundbreaking applications. However, ethical considerations in AI development and deployment are paramount to foster responsible AI implementation and ensure that AI benefits society equitably. By comprehensively understanding AI, we can navigate its evolving landscape with the utmost responsibility and strive to harness its capabilities for the greater good.

5 AI applications in various fields

AI's transformative impact extends across healthcare, transportation, finance, and education. This section explores these applications and addresses ethical considerations for responsible AI development and deployment. Figure  1 presents an overview of the wide-ranging applications of AI across various fields.

Figure 1: AI applications in diverse fields

5.1 Healthcare

The use of AI in healthcare has heralded a new age of revolutionary advances, altering medical procedures and having a profound impact on patient care [ 2 ]. Machine learning algorithms are used in AI-powered medical diagnosis and treatment systems to assess massive volumes of patient data, such as medical records, imaging studies, and genetic information [ 4 ]. By comparing patient data against large databases of known patterns, these AI technologies can help healthcare personnel make more precise and timely diagnoses, resulting in earlier disease identification and more effective treatment strategies. Furthermore, AI's ability to process and interpret complex medical images, such as MRI and CT scans, has shown outstanding accuracy in detecting anomalies, assisting radiologists in spotting subtle abnormalities that the human eye may miss [ 10 ].

Precision medicine, powered by AI, takes personalization to a new level by tailoring therapies to individual patients' genetic makeup, lifestyle, and medical history [ 19 ]. AI algorithms can offer individualized healthcare regimens that maximize treatment efficacy while minimizing adverse effects, resulting in improved patient outcomes and a higher quality of life.

AI-assisted robotic surgeries represent another milestone in healthcare AI applications. Advanced robotic systems, guided by AI algorithms, assist surgeons during surgical procedures by providing real-time insights, enhanced dexterity, and precision [ 20 ]. These AI-driven robotic assistants can make surgery less invasive, reducing trauma to patients, shortening recovery times, and minimizing the risk of complications. The integration of AI into surgical workflows has significantly raised the bar for surgical precision, resulting in superior patient care and expanded surgical capabilities.

5.2 Transportation

The transportation sector is undergoing a revolutionary transformation driven by AI applications. One of the most anticipated breakthroughs is the development of autonomous vehicles and self-driving technologies [ 5 ]. AI algorithms, together with advanced sensors and cameras, enable vehicles to navigate complex traffic environments autonomously. By continuously processing real-time data, AI-equipped self-driving cars can detect and respond to obstacles, traffic signals, and pedestrian movements, significantly reducing the likelihood of accidents caused by human errors. The potential impact of autonomous vehicles extends beyond enhancing road safety; it holds the promise of alleviating traffic congestion, optimizing energy consumption, and enabling seamless transportation for the elderly and disabled populations.

Intelligent traffic management systems powered by AI offer promising solutions to traffic congestion and can enhance overall transportation efficiency [ 21 ]. By collecting data from numerous sources such as traffic cameras, GPS devices, and weather feeds, these AI systems can optimize traffic flow, identify congestion hotspots, and dynamically adjust traffic signal timings to cut wait times. Smart traffic management has the potential to improve urban mobility while also lowering carbon emissions and promoting sustainable transportation.
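
As a hypothetical sketch of one policy such a system might apply, the function below splits a fixed signal cycle across approaches in proportion to their observed queue lengths, with a guaranteed minimum green time per approach (all numbers are illustrative):

```python
def green_times(queues, cycle=60, minimum=5):
    """Split a fixed signal cycle across approaches in proportion to their
    queue lengths, guaranteeing each approach a minimum green time."""
    total = sum(queues)
    spare = cycle - minimum * len(queues)
    return [minimum + (spare * q / total if total else spare / len(queues))
            for q in queues]

# Four approaches; the busiest one gets the longest green phase.
print(green_times([30, 10, 5, 5]))
```

Real adaptive signal controllers are far more elaborate, but the principle is the same: allocate scarce green time where sensed demand is highest.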

AI is also important in optimizing logistics and transportation networks [ 22 ]. AI algorithms can optimize supply chain operations, cut transportation costs, and enhance delivery times by evaluating massive volumes of data on shipping routes, cargo loads, and transportation timetables. Furthermore, AI's predictive capabilities allow organizations to more efficiently forecast demand variations and plan inventory management, decreasing waste and improving overall operational efficiency.

5.3 Finance and economics

The impact of AI on the financial and economics sectors has been tremendous, with significant changes in established processes and the introduction of creative solutions [ 6 ]. Algorithmic trading powered by AI has transformed financial markets, enabling faster and more data-driven decision-making. Machine learning algorithms automatically evaluate market data, discover patterns, and execute trades, resulting in better investing strategies and more efficient capital allocation. AI-powered trading systems can react to market movements and quickly adjust trading positions, improving trading results and portfolio performance.
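
Production trading systems are far more sophisticated, but a classic moving-average crossover rule sketches how an algorithm turns price data into trading signals. The prices and window sizes below are toy values, not a recommended strategy:

```python
def moving_average(prices, window):
    """Trailing average over the last `window` prices."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

def crossover_signal(prices, short=2, long=3):
    """Emit 'buy' when the short moving average rises above the long one."""
    s, l = moving_average(prices, short), moving_average(prices, long)
    s = s[len(s) - len(l):]  # align the two series on the same trailing days
    return ["buy" if a > b else "hold" for a, b in zip(s, l)]

prices = [10, 10, 10, 11, 12, 13]
print(crossover_signal(prices))  # ['hold', 'buy', 'buy', 'buy']
```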

AI's contribution to risk assessment and fraud detection in the financial sector has been critical in safeguarding the security and integrity of financial transactions [ 23 ]. Machine learning algorithms can evaluate historical transaction data, detect aberrant patterns, and flag potentially fraudulent activity in real time. By continuously learning from new data, these AI systems can adapt to evolving fraud tactics and increase the resilience of financial institutions against fraudulent threats.
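
As a deliberately crude stand-in for a learned fraud model, the sketch below flags transactions whose amounts deviate sharply from the statistical baseline; real systems learn from far richer features than the amount alone:

```python
import statistics

def flag_anomalies(amounts, threshold=3.0):
    """Flag transactions more than `threshold` standard deviations from
    the mean -- a crude proxy for learned fraud-detection models."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > threshold]

# Typical card transactions with one clearly unusual amount.
txns = [20, 25, 22, 18, 30, 24, 21, 5000]
print(flag_anomalies(txns, threshold=2.0))  # [7]
```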

With the incorporation of AI technology, economic forecasting and predictive analytics have also seen considerable advances [ 24 ]. AI-powered models can process large and diverse datasets, such as economic indicators, consumer behavior, and macroeconomic factors, to produce more accurate forecasts and insights. AI-driven economic projections help policymakers and businesses make informed decisions, plan resource allocation, and adapt proactively to changing economic conditions, resulting in more stable and resilient economies.

5.4 Education

AI is altering the educational landscape by bringing creative solutions to improve student learning experiences and outcomes [ 7 , 9 ]. Artificial intelligence-based adaptive learning systems use data analytics and machine learning algorithms to assess individual students' strengths and weaknesses in real time. Adaptive learning platforms generate tailored learning pathways by adapting instructional content to each student's unique learning pace and preferences, increasing engagement and information retention. Targeted interventions, interactive courses, and timely feedback can help students improve their academic performance and gain a deeper grasp of subjects.
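
The adaptive core of such a platform can be sketched with a simple streak-based difficulty rule; the rule and its parameters are hypothetical simplifications of what real adaptive learning systems do:

```python
def next_difficulty(history, current, step=1, window=3):
    """Raise difficulty after a streak of correct answers, lower it after
    a streak of mistakes -- the core loop of an adaptive learning system."""
    recent = history[-window:]
    if len(recent) == window and all(recent):
        return current + step
    if len(recent) == window and not any(recent):
        return max(1, current - step)
    return current

level = 3
print(next_difficulty([True, True, True], level))     # 4: student is ready for more
print(next_difficulty([False, False, False], level))  # 2: ease off
print(next_difficulty([True, False, True], level))    # 3: stay put
```

Real platforms replace this threshold rule with statistical learner models, but the feedback loop -- observe performance, adjust content -- is the same.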

Intelligent tutoring systems are yet another advancement in educational AI [ 25 ]. These systems use natural language processing and machine learning to provide students with tailored instruction and support. By recognizing and responding to students' questions and learning needs, intelligent tutoring systems provide personalized guidance, promote self-directed learning, and reinforce concepts through interactive exercises. This individualized learning experience not only improves students' academic performance but also instills the confidence and motivation to pursue their interests further.

AI is also important in measuring learning outcomes and educational analytics [ 26 ]. AI algorithms can provide significant insights into learning patterns, instructional efficacy, and curriculum design by evaluating massive amounts of educational data, including student performance indicators and assessment results. These data-driven insights can be used by educational institutions and policymakers to optimize educational programs, identify areas for development, and create evidence-based policies that encourage improved educational results.

AI applications in healthcare, transportation, finance, and education have fundamentally altered their respective fields, pushing the limits of what is possible.

6 Ethical and societal implications of AI

This section investigates the ethical and societal consequences of artificial intelligence. Figure  2 depicts an in-depth examination of the ethical and societal ramifications of AI. This graphic depicts the primary areas of influence, which include employment, privacy, fairness, and human autonomy. Understanding these ramifications is critical for navigating the appropriate development and deployment of AI technology, assuring an ethical and societally beneficial future.

Figure 2: Ethical and societal implications of AI

6.1 Impact on employment and workforce

Concerns have been raised about the influence of AI technologies on jobs and the workforce as they have become more widely adopted. Certain job roles may be vulnerable to displacement as AI-driven automation becomes more ubiquitous, potentially leading to unemployment and economic instability [ 27 , 28 ]. Routine and repetitive tasks are especially prone to automation, potentially affecting industries including manufacturing, customer service, and data entry. Furthermore, AI's ability to analyze massive amounts of data and execute complicated tasks may replace certain specialized positions, such as data analysis and pattern recognition, contributing to labor displacement [ 41 ]. To address this challenge, proactive measures are required to reskill and upskill the workforce for the AI era. Investing in education and training programs that equip employees with AI-related skills such as data analysis, programming, and problem-solving will enable smoother job transitions and foster a more adaptable and resilient labor market. Governments, businesses, and educational institutions must collaborate to develop comprehensive policies and initiatives that prepare individuals for the changing job landscape and ensure that the benefits of AI are distributed equitably across society.

6.2 Privacy, security, and data ethics

The increasing reliance on AI systems, particularly those that utilize vast amounts of personal data, raises critical ethical considerations related to privacy and data ethics [ 29 ]. The responsible and ethical use of data becomes paramount, requiring organizations to ensure informed consent, data anonymization, and stringent data protection measures. The misuse of, or unauthorized access to, personal data by AI systems poses significant risks to individuals' privacy and can enable various forms of exploitation, such as identity theft and targeted advertising. Furthermore, if AI technologies are not adequately regulated, they may intensify surveillance concerns, potentially resulting in infringements of civil liberties and privacy rights [ 42 ]. To counter these threats, legislators must enact strong data protection legislation and ethical norms that regulate how AI systems collect, store, and use personal data. Transparency and accountability in AI development and deployment are critical for establishing public trust and guaranteeing responsible data management.

6.3 Bias, fairness, and transparency in AI systems

AI systems are only as unbiased as the data on which they are trained, and inherent biases in that data can result in biased AI decision-making [ 30 ]. Algorithmic bias can lead to unequal treatment and discrimination, sustaining societal imbalances and reinforcing preexisting prejudices. Addressing algorithmic bias requires thorough data curation, diversity in data representation, and constant monitoring and evaluation of AI systems for emerging biases. Furthermore, ensuring fairness and transparency in AI decision-making is critical for increasing public trust in AI systems. AI systems must be built to provide clear explanations for their judgments, allowing users to understand the logic underlying AI-generated outcomes. To encourage transparency and accountability, AI developers should disclose the criteria and data used in constructing AI models.
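
One of the simplest bias audits compares positive-outcome rates across groups (a demographic-parity check). The hiring data below is invented purely for illustration:

```python
def selection_rates(decisions, groups):
    """Positive-outcome rate per group -- a basic demographic-parity audit."""
    rates = {}
    for g in set(groups):
        picks = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

# Hypothetical hiring decisions (1 = hired) broken down by group label.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = selection_rates(decisions, groups)
print(rates)  # a large gap between groups signals possible bias
```

Fairness auditing in practice uses many complementary metrics (equalized odds, calibration, and others), since no single number captures fairness; this check is only a first screen.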

6.4 AI and human autonomy

As AI technologies advance, they have the potential to influence human autonomy and decision-making [ 31 ]. AI-powered recommendation systems, personalized marketing, and social media algorithms may impact human behavior, preferences, and views, creating ethical concerns about individual manipulation and persuasion. In the design and deployment of AI systems, striking a balance between improving user experiences and protecting human agency becomes crucial [ 43 ]. Policymakers and technologists must consider the ethical implications of AI-driven persuasion and manipulation and implement safeguards to protect individuals from undue influence. Additionally, AI developers should adopt ethical guidelines that prioritize human autonomy and empower users to make informed choices and maintain control over their digital experiences.

Accordingly, as AI technologies continue to advance and permeate various aspects of society, addressing the ethical and societal implications of AI becomes paramount. The impact of AI on employment and the workforce necessitates proactive efforts to reskill and upskill individuals, ensuring that the benefits of AI are shared inclusively. Privacy, security, and data ethics demand responsible data handling and robust regulations to safeguard individuals' personal information [ 44 ]. Addressing bias, ensuring fairness and transparency, and preserving human autonomy are crucial in building trust and fostering the responsible development and deployment of AI technologies. By navigating these ethical challenges thoughtfully and collaboratively, we can harness the potential of AI to shape a future that prioritizes human well-being and societal values.

7 Challenges, risks, and regulation of Artificial Intelligence

Section 7 discusses the challenges, risks, and regulation of AI, providing an overview of concerns related to superintelligence, transparency, unemployment, and ethical considerations. Understanding these complexities is vital for guiding responsible AI development and governance.

7.1 Superintelligence and existential risks

As AI technologies advance, the prospect of creating Artificial General Intelligence (AGI) or superintelligent systems raises existential risks [ 32 ]. Superintelligence refers to AI systems that surpass human intelligence across all domains, potentially leading to unforeseen and uncontrollable consequences. To avoid disastrous outcomes, it is vital that AGI is developed with rigorous safety mechanisms and aligned with human values. The fear is that AGI will outpace human comprehension and control, resulting in unanticipated actions or decisions with far-reaching and irreversible repercussions. To address this, researchers and governments must invest in AGI safety research and form worldwide partnerships to construct governance structures that prioritize the safe and responsible development of AGI.

7.2 Lack of transparency and accountability in AI systems

One of the major issues in AI is the lack of transparency and accountability in the decision-making processes of AI systems [ 30 ]. Complex AI systems, such as deep neural networks, can be difficult to analyze and explain, giving rise to the "black box" AI problem [ 16 ]. This lack of transparency raises worries about possible biases, errors, or discriminatory effects from AI judgments. Researchers and developers must focus on constructing interpretable AI models that can provide explicit explanations for their actions in order to establish confidence and ensure the responsible usage of AI. Furthermore, building accountability frameworks that hold businesses and developers accountable for AI system outcomes is critical in addressing potential legal and ethical repercussions.

7.3 Unemployment, socioeconomic disparities, and the future of work

The rapid deployment of AI-driven automation has ramifications for employment and social inequities. As AI replaces certain job roles and tasks, there is a possibility of job displacement, leading to unemployment and income inequality [ 28 ]. Low-skilled workers in industries highly susceptible to automation may face the most significant challenges in transitioning to new job opportunities. Addressing these challenges requires a multi-faceted approach, including retraining and upskilling programs, social safety nets, and policies that promote job creation in emerging AI-related sectors. Additionally, measures such as universal basic income and shorter workweeks have been proposed to alleviate the potential socioeconomic impact of AI-driven automation on the workforce.

7.4 Ethical, legal, and regulatory considerations for AI development and deployment

The rapid advancement of AI technologies has outpaced the development of comprehensive ethical, legal, and regulatory frameworks [ 33 ]. Ensuring that AI is developed and deployed responsibly and ethically is crucial to avoid potential harm to individuals and society at large. Ethical considerations include addressing algorithmic bias, ensuring fairness, and safeguarding privacy and data rights. Legal and regulatory considerations encompass liability issues, data protection laws, and intellectual property rights related to AI systems. The need for international cooperation in formulating AI governance frameworks is paramount, as AI's impact transcends national boundaries. Policymakers, industry stakeholders, and experts must work collaboratively to establish guidelines and standards that promote the ethical development and use of AI technologies while striking a balance between innovation and protecting the common good.

In conclusion, while AI technologies hold immense promise, they also present significant challenges and risks that must be addressed proactively and responsibly. Superintelligence and existential risks demand focused research and governance to ensure AGI development is aligned with human values. The lack of transparency and accountability in AI systems necessitates efforts to create interpretable and accountable AI models. The potential impact of AI-driven automation on employment and socioeconomic disparities requires comprehensive policies and safety nets to support workforce transitions. Ethical, legal, and regulatory considerations are vital in fostering the responsible development and deployment of AI while balancing innovation with societal well-being. By addressing these challenges and risks collectively, we can harness the transformative potential of AI while safeguarding the welfare of humanity.

8 Opportunities and future directions

8.1 Collaborative intelligence: human–AI collaboration

The future of AI lies in collaborative intelligence, where humans and AI systems work together synergistically to achieve outcomes that neither could achieve alone [ 34 ]. Human-AI collaboration has the potential to revolutionize various fields, from healthcare and education to scientific research and creative endeavors. By combining human creativity, intuition, and empathy with AI's computational power, data analysis, and pattern recognition, we can tackle complex challenges more effectively. Collaborative intelligence enables AI systems to assist humans in decision-making, provide contextually relevant information, and augment human capabilities in problem-solving and innovation. However, realizing the full potential of collaborative intelligence requires addressing human-AI interaction challenges, ensuring seamless communication, and fostering a human-centric approach to AI system design.

8.2 Augmentation and amplification of human capabilities with AI

The role of AI in the future is not to replace people but to maximize human potential. Through augmentation and amplification, AI technology can enable humans to thrive in their fields, whether in healthcare, creative work, or professional activities [ 35 ]. By streamlining workflows, automating repetitive operations, and providing real-time insights, AI-powered technologies let professionals focus on higher-level tasks that require human creativity, empathy, and critical thinking. Furthermore, AI-powered personalized learning and adaptive tutoring systems can adapt to individual learning needs, allowing students and lifelong learners to reach their full potential. Augmenting human talents with AI creates a symbiotic connection in which AI acts as a tool that complements human expertise, resulting in greater productivity, creativity, and overall well-being.

8.3 Explainable AI: advancements in interpretability and trustworthiness

To overcome the "black box" nature of large AI models, explainable AI is a vital area of research and development. As AI systems grow more common, it is critical to understand how they arrive at their judgments and predictions. Advances in interpretability enable AI systems to provide clear explanations for their reasoning, increasing the transparency, trustworthiness, and accountability of AI [ 36 ]. Explainable AI not only increases user trust but also allows domain experts to assess AI-generated outputs and uncover potential biases or inaccuracies. Researchers are investigating novel methods for improving the explainability of AI systems while preserving high performance, such as interpretable machine learning models and transparent AI algorithms. By creating explainable AI, we can bridge the gap between AI's capabilities and human understanding, making AI more accessible and useful across a wide range of applications.
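
One widely used model-agnostic interpretability technique is permutation importance: shuffle a single feature's values and measure how much the model's accuracy drops. The model and data below are contrived so that only one feature matters:

```python
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Average accuracy drop when one feature's values are shuffled --
    a simple model-agnostic explanation technique."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + (v,) + row[feature + 1:]
              for row, v in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return sum(drops) / trials

# A 'model' that only ever looks at feature 0.
model = lambda x: int(x[0] > 0.5)
X = [(0.9, 0.1), (0.8, 0.7), (0.2, 0.9), (0.1, 0.3)]
y = [1, 1, 0, 0]
print(permutation_importance(model, X, y, feature=0))  # large drop: feature used
print(permutation_importance(model, X, y, feature=1))  # 0.0: feature ignored
```

The appeal of the method is that it treats the model purely as a black box, so it applies equally to a decision tree or a deep network.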

8.4 Ethical frameworks and guidelines for AI development and governance

The future of AI necessitates strong ethical frameworks and norms that value human well-being, fairness, and transparency [ 37 ]. Establishing thorough ethical guidelines is critical for navigating the ethical issues of AI, such as algorithmic bias, privacy problems, and the influence of AI on society. Policymakers, industry leaders, and researchers must collaborate to create AI systems that conform to ethical principles while respecting human rights and values. Furthermore, global cooperation is critical for addressing cross-border ethical quandaries and ensuring a consistent approach to AI regulation. To set norms that safeguard individuals, promote societal good, and prevent AI exploitation, ethical AI development necessitates a multi-stakeholder approach encompassing academia, industry, governments, and civil society. Furthermore, accountability frameworks that hold businesses accountable for the acts and consequences of their AI systems are critical in creating trust and responsible AI implementation.

The future of AI is full of potential to make breakthrough advances that benefit humanity. Collaborative intelligence, in which humans and AI systems collaborate, has potential for addressing challenging challenges and achieving breakthroughs across multiple areas. AI can help humans achieve unprecedented levels of efficiency and creativity. Advances in explainable AI will increase openness and trust, allowing for the responsible integration of AI into key applications. However, realizing this vision requires a strong foundation of ethical principles and norms to guarantee AI is created and deployed ethically, with human welfare at its core. By embracing these opportunities and adopting a human-centric approach, we can design a future in which AI serves as a powerful tool for positive change while respecting the values and principles that characterize our shared humanity.

9 AI and global challenges

9.1 Climate change and environmental sustainability

The application of AI to climate change and environmental sustainability opens new avenues for addressing some of the world's most critical issues. AI's data processing and pattern recognition capabilities make it a powerful tool for climate modeling and prediction. AI-powered climate models can examine massive amounts of environmental data, such as temperature records, carbon emissions, and weather patterns, to produce more accurate and actionable predictions of climate change impacts [ 38 ]. AI can also optimize energy usage and resource management, contributing to a more sustainable future: AI-powered systems can assess energy use trends, detect inefficiencies, and suggest options for energy conservation and renewable energy integration. Finally, AI-enabled solutions, such as autonomous drones for environmental monitoring and analysis, can support conservation efforts by tracking deforestation, wildlife habitats, and illegal poaching activities, allowing for more effective conservation strategies and the protection of biodiversity.

9.2 Public health and pandemic response

The ongoing COVID-19 pandemic has emphasized the potential of artificial intelligence in public health and pandemic response. AI-based techniques for early diagnosis and control of infectious diseases are critical in preventing outbreaks from spreading. AI algorithms may evaluate a wide range of data sources, including social media, medical records, and mobility patterns, to detect early indicators of disease outbreaks and pinpoint high-risk locations for targeted interventions [ 39 ]. Furthermore, AI-driven vaccine development and distribution strategies can speed up the vaccine discovery process and optimize vaccine distribution based on parameters such as population density and vulnerability. The power of AI to analyze massive amounts of healthcare data can lead to better public health decisions and resource allocation. AI models, for example, may predict disease patterns, identify high-risk population groups, and optimize healthcare supply chain operations to ensure timely and efficient delivery of medicinal supplies.
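
A toy early-warning heuristic illustrates the idea of spike detection on reported case counts; real surveillance systems fuse many more data sources, and the counts below are invented:

```python
def detect_outbreak(daily_cases, baseline_window=7, factor=2.0):
    """Flag days whose case count exceeds `factor` times the trailing
    baseline average -- a crude early-warning heuristic."""
    alerts = []
    for i in range(baseline_window, len(daily_cases)):
        baseline = sum(daily_cases[i - baseline_window:i]) / baseline_window
        if daily_cases[i] > factor * baseline:
            alerts.append(i)
    return alerts

# A week of stable counts followed by a sudden spike on days 8 and 9.
cases = [10, 12, 9, 11, 10, 13, 10, 11, 30, 45]
print(detect_outbreak(cases))  # [8, 9]
```

Learned surveillance models replace the fixed threshold with statistical baselines and auxiliary signals (mobility, search queries), but the goal is the same: raise an alert as early in the outbreak curve as possible.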

9.3 Social justice and equity

AI has the ability to play a critical role in advancing social justice and equity by tackling systemic biases and inequalities. AI applications can be used to discover and correct biases in domains such as criminal justice, recruiting processes, and resource allocation. By harnessing AI's data-driven insights, governments and institutions can create evidence-based policies that minimize discrimination and enhance outcomes for underrepresented people [ 40 ]. When employing AI for social justice, ethical considerations are crucial because critical decisions affecting people's lives are involved. To guarantee that AI technologies have a beneficial impact, they must be developed and used in a transparent, fair, and accountable manner. Furthermore, AI can be used to encourage inclusivity and diversity in decision-making processes. Organizations may build more fair policies and foster a more inclusive society by utilizing AI algorithms that examine multiple perspectives and prioritize representation.

AI's contribution to global challenges represents a transformative opportunity to address humanity's most critical issues. In the fight against climate change, AI can provide vital insights for better decision-making, optimize resource management, and aid environmental conservation efforts. AI-powered solutions in public health can improve early identification of infectious diseases, speed up vaccine research, and enhance healthcare data analysis for better public health outcomes. Furthermore, AI has the ability to promote social justice and equity by eliminating biases, increasing transparency, and harnessing technology for inclusivity and diversity. As we use AI to address global concerns, it is critical that we approach its development and deployment responsibly, ensuring that the advantages of AI are distributed equitably and aligned with the ideals and ambitions of a better, more sustainable world.

10 Conclusion

10.1 Recapitulation of key points and contributions

In this paper, we examined the multifaceted landscape of AI and its profound impact on humanity. We began by reviewing the historical evolution of AI, from its origins to the current state of cutting-edge technologies. The key types of AI systems, including symbolic AI, machine learning, and deep learning, were elucidated, along with their fundamental concepts, such as neural networks and algorithms. We identified AI's potential to revolutionize various fields, including healthcare, transportation, finance, and education, with applications ranging from medical diagnosis and autonomous vehicles to algorithmic trading and personalized learning. We also highlighted AI's ethical implications, including concerns related to bias, fairness, transparency, and human autonomy.

10.2 Discussion of the transformative potential of AI for humanity

Throughout this work, it became clear that AI has enormous revolutionary potential for humanity. AI has already demonstrated its ability to improve medical diagnosis, optimize transportation, enhance financial decision-making, and revolutionize education. Collaborative intelligence between humans and AI opens new frontiers, amplifying human capabilities and fostering creativity and innovation. Furthermore, AI can contribute significantly to solving global challenges, including climate change, public health, and social justice, through climate modeling, early disease detection, and reducing bias in decision-making. The transformative potential of AI lies in its capacity to augment human abilities, foster data-driven decision-making, and address critical societal challenges.

10.3 Implications for policymakers, researchers, and practitioners

The advent of AI brings forth profound implications for policymakers, researchers, and practitioners. Policymakers must proactively address AI's ethical, legal, and societal implications, crafting comprehensive regulations and guidelines that protect individual rights and promote equitable access to AI-driven innovations. Researchers bear the responsibility of developing AI technologies that prioritize transparency, interpretability, and fairness to ensure that AI aligns with human values and is accountable for its decisions. For practitioners, the responsible and ethical deployment of AI is paramount, ensuring that AI systems are designed to benefit individuals and society at large, with a focus on inclusivity and addressing biases.

10.4 Directions for future research and responsible AI development

As AI continues to advance, future research should prioritize several key areas. AI safety and explainability must be at the forefront, ensuring that AI systems are transparent, interpretable, and accountable. Additionally, addressing AI's impact on employment and the workforce requires research into effective reskilling and upskilling programs to support individuals in the AI-driven economy. Ethical AI development should be ingrained into research and industry practices, promoting fairness, inclusivity, and the avoidance of harmful consequences. Collaboration and international cooperation are vital to develop responsible AI frameworks that transcend geographical boundaries and address global challenges.

AI stands at the threshold of reshaping humanity's future. Its transformative potential to revolutionize industries, address global challenges, and augment human capabilities holds great promise. However, realizing this potential requires a concerted effort from policymakers, researchers, and practitioners to navigate the ethical challenges, foster collaboration, and ensure AI benefits humanity equitably. As we embark on this AI-driven journey, responsible development, and the pursuit of innovation in alignment with human values will lead us to a future where AI enhances human life, enriches society, and promotes a more sustainable and equitable world.

Data availability

Not applicable.



Acknowledgements

This work was not supported by any funding agency or grant.

Author information

Authors and Affiliations

Faculty of Science, Department of Mathematics and Computer Science, Beirut Arab University, Beirut, Lebanon


Contributions

The sole author, who is also the corresponding author, conducted all aspects of the research presented in this paper and wrote the manuscript.

Corresponding author

Correspondence to Soha Rawas .

Ethics declarations

Ethics approval and consent to participate

This study was exempt from ethics approval because it did not involve human or animal subjects. The data used in this study were publicly available and did not require informed consent from participants.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Rawas, S. AI: the future of humanity. Discov Artif Intell 4, 25 (2024). https://doi.org/10.1007/s44163-024-00118-3


Received: 19 October 2023

Accepted: 18 March 2024

Published: 26 March 2024



  • Artificial Intelligence
  • Future of humanity
  • Applications of AI
  • Ethical implications
  • Challenges and risks
  • Global challenges


Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will

Table of contents.

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

[Table: main themes respondents sounded about AI’s threats and possible remedies]

Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

We need to work aggressively to make sure technology matches our values. Erik Brynjolfsson


Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb , founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as robotocists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”


Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs , a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman , executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.


Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has entered our daily lives through mobile devices and the Internet. Governments and businesses increasingly use AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring new realities to social life that may not have been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, it also describes the future scope of AI and the harmful effects of using it.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to perform tasks that would normally require human intelligence. AI systems are essentially software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard-coding every possibility (i.e., every algorithmic step) into the software. Because of this, AI has begun to offer promising solutions for industry and business as well as for our daily lives.
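To make the contrast with hard-coding concrete, here is a toy sketch (our illustration, not part of the essay): a tiny perceptron that learns the logical-AND rule from labelled examples, rather than having the rule written into the code.

```python
# Toy contrast with hard-coded rules: a perceptron learns the AND
# function from labelled examples -- no "if x1 and x2" appears anywhere.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a binary classifier from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 otherwise
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Labelled data for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Nothing here is specific to AND: change the labelled data and the same training loop learns a different rule, which is the essential difference from hard-coded logic.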

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social interactions, shaping daily routines such as the use of mobile devices and active involvement on social media. AI systems are among the most influential of these digital technologies. With AI systems, businesses can handle large data sets and quickly feed essential insights into their operations. They can also adapt to constant change and become more flexible.

By embedding Artificial Intelligence into devices, businesses are moving to automated processes. A new paradigm of intelligent automation is emerging that dictates not only how businesses operate but also who does the work. Many manufacturing sites can now run fully automated with robots and without any human workers. Artificial Intelligence is bringing unprecedented innovations to the business world that many organizations will need to integrate to remain competitive or to lead their competitors.

Artificial Intelligence also shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals, from mobile phones and electronic gadgets to social media platforms. We delegate everyday activities to intelligent applications such as personal assistants and smart wearable devices, while AI systems that operate household appliances help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence has become a popular field in computer science because it extends what humans can do. AI applications are already having a huge impact on complex problems in areas such as education, engineering, business, medicine and weather forecasting, and the work of many labourers can now be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, we risk losing the ability to do work ourselves and becoming lazy. Another disadvantage is that machines cannot provide a human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get the study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams at BYJU’S.


Essay on Future of Artificial Intelligence

Students are often asked to write an essay on Future of Artificial Intelligence in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Future of Artificial Intelligence

Introduction

Artificial Intelligence (AI) is the science of making machines think and learn like humans. It’s an exciting field that’s rapidly changing our world.

Future Possibilities

In the future, AI could take over many jobs, making our lives easier. Robots could clean our houses, and AI could help doctors diagnose diseases.

Challenges Ahead

However, there are challenges. We need to make sure AI is used responsibly, and that it doesn’t take away too many jobs.

The future of AI is promising, but we need to navigate it carefully to ensure it benefits everyone.

250 Words Essay on Future of Artificial Intelligence

Artificial Intelligence (AI) has become an integral part of our daily lives, from smartphones to autonomous vehicles. The future of AI is a topic of intense debate and speculation among scientists, technologists, and futurists.

AI in Everyday Life

The future of AI holds promising advancements in everyday life. We can expect more sophisticated personal assistants, smarter home automation, and advanced healthcare systems. AI will continue to streamline our lives, making mundane tasks more efficient.

AI in Business

In business, AI will revolutionize industries by automating processes and creating new business models. Predictive analytics, customer service, and supply chain management will become more efficient and accurate. AI will also enable personalized marketing, enhancing customer experience and retention.

AI in Ethics and Society

However, the future of AI also poses ethical and societal challenges. Issues such as job displacement due to automation, privacy concerns, and the potential misuse of AI technologies need to be addressed. Ensuring fairness, transparency, and accountability in AI systems will be crucial.

In conclusion, the future of AI is a blend of immense potential and challenges. It will transform our lives and businesses, but also necessitates careful consideration of ethical and societal implications. As we move forward, it is essential to foster a global dialogue about the responsible use and governance of AI.

500 Words Essay on Future of Artificial Intelligence

Artificial Intelligence (AI) has transformed from a fringe scientific concept into a commonplace technology, permeating every aspect of our lives. As we stand on the precipice of the future, it becomes crucial to understand AI’s potential trajectory and the profound implications it might have on society.

The Evolution of AI

The future of AI is rooted in its evolution. Initially, AI was about rule-based systems, where machines were programmed to perform specific tasks. However, the advent of Machine Learning (ML) marked a significant shift. ML enabled machines to learn from data and improve their performance over time, leading to more sophisticated AI models.
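The shift this paragraph describes, from fixed rules to models that improve with data, can be sketched in a few lines. The toy example below (our illustration, not from the essay) fits a line by gradient descent, and the error shrinks as training proceeds.

```python
# Toy illustration of "learning from data": fit y ~ w*x by gradient
# descent and watch the error shrink over successive passes.
data = [(1, 2), (2, 4), (3, 6)]   # noiseless samples of y = 2x

def mse(w):
    """Mean squared error of slope w on the data set."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.05
errors = []
for _ in range(30):
    # Gradient of the MSE with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
    errors.append(mse(w))

print(round(w, 2))           # converges to the true slope, 2.0
assert errors[-1] < errors[0]  # performance improved with training
```

The same loop, given different data, learns a different slope; no rule about the relationship between x and y was ever written down, which is the core idea behind machine learning.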

The current focus is on developing General AI, machines that can perform any intellectual task that a human being can. While we are yet to achieve this, advancements in Deep Learning and Neural Networks are bringing us closer to this reality.

AI in the Future

In the future, AI is expected to become more autonomous and integrated into our daily lives. We will see AI systems that can not only understand and learn from their environment but also make complex decisions, solve problems, and even exhibit creativity.

One of the most promising areas is AI’s role in data analysis. As data continues to grow exponentially, AI will become indispensable in making sense of this information, leading to breakthroughs in fields like healthcare, climate change, and social sciences.

Implications and Challenges

However, the future of AI is not without its challenges. As AI systems become more autonomous, we must grapple with ethical issues. For instance, who is accountable if an AI system makes a mistake? How do we ensure that AI systems are fair and unbiased?

Moreover, as AI continues to automate tasks, there are concerns about job displacement. While AI will undoubtedly create new jobs, it will also render many existing jobs obsolete. Therefore, societies must prepare for this transition by investing in education and training.

The future of AI is a landscape of immense potential and challenges. As we continue to develop more sophisticated AI systems, we must also be mindful of the ethical implications and societal impacts. By doing so, we can harness the power of AI to create a future where technology serves humanity, rather than the other way around.

That’s it! I hope the essay helped you.


Happy studying!


From the world wide web to AI: 11 technology milestones that changed our lives


The world wide web is a key technological milestone in the past 40 years. Image:  Unsplash/Ales Nesetril

Stephen Holroyd


  • It’s been 40 years since the launch of the Apple Macintosh personal computer.
  • Since then, technological innovation has accelerated – here are some of the most notable tech milestones over the past four decades.
  • The World Economic Forum’s EDISON Alliance aims to digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

On 24 January 1984, Apple unveiled the Macintosh 128K and changed the face of personal computers forever.

Steve Jobs’ compact, user-friendly computer introduced the graphical user interface to the world, marking a pivotal moment in the evolution of personal technology.

Since that day, the rate of technological innovation has exploded, with developments in computing, communication, connectivity and machine learning expanding at an astonishing rate.

Here are some of the key technological milestones that have changed our lives over the past 40 years.

1993: The world wide web

Although the internet’s official birthday is often debated, it was the invention of the world wide web that drove the democratization of information access and shaped the modern internet we use today.

Created by British scientist Tim Berners-Lee, the World Wide Web was launched to the public in 1993 and brought with it the dawn of online communication, e-commerce and the beginning of the digital economy.

Despite the enormous progress since its invention, 2.6 billion people still lack internet access and global digital inclusion is considered a priority. The World Economic Forum’s EDISON Alliance aims to bridge this gap and digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

1997: Wi-Fi

The emergence of publicly available Wi-Fi in 1997 changed the face of internet access, removing the need to tether to a network via a cable. Without Wi-Fi, the smartphone and the ever-present internet connection we’ve come to rely on wouldn’t have been possible, and it has become an indispensable part of our modern, connected world.

1998: Google

The launch of Google’s search engine in 1998 marked the beginning of efficient web search, transforming how people across the globe accessed and navigated online information. Today, there are many others to choose from – Bing, Yahoo!, Baidu – but Google remains the world’s most-used search engine.

2004: Social media

Over the past two decades, the rise of social media and social networking has dominated our connected lives. In 2004, MySpace became the first social media site to reach one million monthly active users. Since then, platforms like Facebook, Instagram and TikTok have reshaped communication and social interaction, nurturing global connectivity and information sharing on an enormous scale, albeit not without controversy.

Most popular social networks worldwide as of January 2024, ranked by number of monthly active users

2007: The iPhone

More than a decade after the first smartphone had been introduced, the iPhone redefined mobile technology by combining a phone, music player, camera and internet communicator in one sleek device. It set new standards for smartphones and ultimately accelerated the explosion of smartphone usage we see across the planet today.

2009: Bitcoin

The foundations for modern digital payments were laid in the late 1950s with the introduction of the first credit and debit cards, but it was the invention of Bitcoin in 2009 that set the stage for a new era of secure digital transactions. The first decentralized cryptocurrency, Bitcoin introduced a new form of digital payment system that operates independently of traditional banking systems. Its underlying technology, blockchain, revolutionized the concept of digital transactions by providing a secure, transparent, and decentralized method for peer-to-peer payments. Bitcoin has not only influenced the development of other cryptocurrencies but has also sparked discussions about the future of money in the digital age.
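The hash-chaining idea behind blockchain can be sketched minimally. The Python below is our simplification (it omits proof-of-work, Merkle trees and the peer-to-peer network that real Bitcoin adds), but it shows why tampering with an early record invalidates every later link.

```python
# Minimal sketch of hash-chaining: each block commits to the previous
# block's hash, so altering any earlier record breaks verification.
import hashlib

def block_hash(prev_hash, data):
    """SHA-256 digest binding a block's data to its parent's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64      # genesis parent is an all-zero hash
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob:5", "bob->carol:2"])
print(verify(chain))            # True: the chain is intact
chain[0]["data"] = "alice->bob:500"   # tamper with an early record...
print(verify(chain))            # False: every later link now fails
```

Because every hash depends on all the data before it, transparency and tamper-evidence come from the structure itself rather than from a trusted central ledger.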

2014: Virtual reality

2014 was a pivotal year in the development of virtual reality (VR) for commercial applications. Facebook acquired the Oculus VR company for $2 billion and kickstarted a drive for high-quality VR experiences to be made accessible to consumers. Samsung and Sony also announced VR products, and Google released the now discontinued Cardboard – a low-cost, do-it-yourself viewer for smartphones. The first batch of Oculus Rift headsets began shipping to consumers in 2016.

2015: Autonomous vehicles

Autonomous vehicles have gone from science fiction to science fact in the past two decades, and predictions suggest that almost two-thirds of registered passenger cars worldwide will feature partly-assisted driving and steering by 2025. In 2015, the introduction of Tesla’s Autopilot brought autonomous features to consumer vehicles, contributing to the mainstream adoption of self-driving technology.

Cars Increasingly Ready for Autonomous Driving

2019: Quantum computing

A significant moment in the history of quantum computing was achieved in October 2019 when Google’s Sycamore processor demonstrated “quantum supremacy” by solving a complex problem faster than the world’s most powerful supercomputers. Quantum technologies can be used in a variety of applications and offer transformative impacts across industries. The World Economic Forum’s Quantum Economy Blueprint provides a framework for value-led, democratic access to quantum resources to help ensure an equitable global distribution and avoid a quantum divide.

2020: The COVID-19 pandemic

The COVID-19 pandemic accelerated digital transformation on an unprecedented scale. With almost every aspect of human life impacted by the spread of the virus – from communicating with loved ones to how and where we work – the rate of innovation and uptake of technology across the globe emphasized the importance of remote work, video conferencing, telemedicine and e-commerce in our daily lives.

2022: Artificial intelligence

Artificial intelligence (AI) technology has been around for some time and AI-powered consumer electronics, from smart home devices to personalized assistants, have become commonplace. However, the emergence of mainstream applications of generative AI has dominated the sector in recent years.

In 2022, OpenAI unveiled its chatbot, ChatGPT. Within a week, it had gained over one million users and become the fastest-growing consumer app in history. In the same year, DALL-E 2, a text-to-image generative AI tool, also launched.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.


OpenAI Unveils New ChatGPT That Listens, Looks and Talks

Chatbots, image generators and voice assistants are gradually merging into a single technology with a conversational voice.


By Cade Metz

Reporting from San Francisco

As Apple and Google transform their voice assistants into chatbots, OpenAI is transforming its chatbot into a voice assistant.

On Monday, the San Francisco artificial intelligence start-up unveiled a new version of its ChatGPT chatbot that can receive and respond to voice commands, images and videos.

The company said the new app — based on an A.I. system called GPT-4o — juggles audio, images and video significantly faster than previous versions of the technology. The app will be available starting on Monday, free of charge, for both smartphones and desktop computers.

“We are looking at the future of the interaction between ourselves and machines,” said Mira Murati, the company’s chief technology officer.

The new app is part of a wider effort to combine conversational chatbots like ChatGPT with voice assistants like the Google Assistant and Apple’s Siri. As Google merges its Gemini chatbot with the Google Assistant, Apple is preparing a new version of Siri that is more conversational.

OpenAI said it would gradually share the technology with users “over the coming weeks.” This is the first time it has offered ChatGPT as a desktop application.

The company previously offered similar technologies from inside various free and paid products. Now, it has rolled them into a single system that is available across all its products.

During an event streamed on the internet, Ms. Murati and her colleagues showed off the new app as it responded to conversational voice commands, used a live video feed to analyze math problems written on a sheet of paper and read aloud playful stories that it had written on the fly.

The new app cannot generate video. But it can generate still images that represent frames of a video.

With the debut of ChatGPT in late 2022, OpenAI showed that machines can handle requests more like people. In response to conversational text prompts, it could answer questions, write term papers and even generate computer code.

ChatGPT was not driven by a set of rules. It learned its skills by analyzing enormous amounts of text culled from across the internet, including Wikipedia articles, books and chat logs. Experts hailed the technology as a possible alternative to search engines like Google and voice assistants like Siri.

Newer versions of the technology have also learned from sounds, images and video. Researchers call this “multimodal A.I.” Essentially, companies like OpenAI began to combine chatbots with A.I. image, audio and video generators.

(The New York Times sued OpenAI and its partner, Microsoft, in December, claiming copyright infringement of news content related to A.I. systems.)

As companies combine chatbots with voice assistants, many hurdles remain. Because chatbots learn their skills from internet data, they are prone to mistakes. Sometimes, they make up information entirely, a phenomenon that A.I. researchers call “hallucination.” Those flaws are migrating into voice assistants.

While chatbots can generate convincing language, they are less adept at taking actions like scheduling a meeting or booking a plane flight. But companies like OpenAI are working to transform them into “A.I. agents” that can reliably handle such tasks.

OpenAI previously offered a version of ChatGPT that could accept voice commands and respond with voice. But it was a patchwork of three different A.I. technologies: one that converted voice to text, one that generated a text response and one that converted this text into a synthetic voice.

The new app is based on a single A.I. technology — GPT-4o — that can accept and generate text, sounds and images. This means that the technology is more efficient, and the company can afford to offer it to users for free, Ms. Murati said.

“Before, you had all this latency that was the result of three models working together,” Ms. Murati said in an interview with The Times. “You want to have the experience we’re having — where we can have this very natural dialogue.”
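The latency point Ms. Murati makes can be sketched schematically. The code below is a hypothetical illustration, not the real OpenAI API: three chained stand-in stages whose simulated delays add up, versus a single end-to-end model making one pass.

```python
# Schematic sketch of the latency argument (hypothetical stand-in
# functions, NOT the OpenAI API): the old voice mode chained three
# models, so each stage's delay adds up; one multimodal model
# answers in a single pass.
import time

def timed(latency, fn):
    """Wrap a stage with a simulated per-call latency, in seconds."""
    def wrapper(x):
        time.sleep(latency)
        return fn(x)
    return wrapper

# Stand-ins for the three stages of the old pipeline.
transcribe = timed(0.05, lambda audio: "what's the weather?")   # speech -> text
generate   = timed(0.05, lambda text: "Sunny and 22 degrees.")  # text -> text
synthesize = timed(0.05, lambda text: b"<synthetic speech>")    # text -> speech

def old_voice_pipeline(audio):
    return synthesize(generate(transcribe(audio)))

# One end-to-end multimodal pass, as described for GPT-4o.
multimodal = timed(0.05, lambda audio: b"<synthetic speech>")

start = time.perf_counter()
old_voice_pipeline(b"...")
chained = time.perf_counter() - start

start = time.perf_counter()
multimodal(b"...")
unified = time.perf_counter() - start

print(chained > unified)  # True: the three-stage chain is the slower path
```

The per-stage delays here are arbitrary; the point is structural, since a chain's latency is the sum of its stages while a single model pays one stage's cost.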

An earlier version of this article misstated the day when OpenAI introduced its new version of ChatGPT. It was Monday, not Tuesday.




CVPR Technical Program Features Presentations on the Latest AI and Computer Vision Research


LOS ALAMITOS, Calif., 16 May 2024 – Co-sponsored by the IEEE Computer Society (CS) and the Computer Vision Foundation (CVF), the 2024 Computer Vision and Pattern Recognition (CVPR) Conference is the preeminent event for research and development (R&D) in the hot topic areas of computer vision, artificial intelligence (AI), machine learning (ML), augmented, virtual and mixed reality (AR/VR/MR), deep learning, and related fields. Over the past decade, these areas have seen significant growth, and the emphasis on this sector by the science and engineering community has fueled an increasingly competitive technical program.

This year, the CVPR Program Committee received 11,532 paper submissions—a 26% increase over 2023—but only 2,719 were accepted, resulting in an acceptance rate of just 23.6%. Of those accepted papers, only 3.3% were slotted for oral presentations based on nominations from the area chairs and senior area chairs overseeing the program.
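As a quick sanity check, the figures quoted above are internally consistent; the short sketch below (using only the numbers from the release) reproduces them:

```python
# Numbers quoted in the CVPR 2024 press release.
submissions = 11532
accepted = 2719

# Acceptance rate: accepted papers as a share of submissions.
acceptance_rate = accepted / submissions * 100
print(f"Acceptance rate: {acceptance_rate:.1f}%")  # ~23.6%, matching the release

# Roughly 3.3% of accepted papers were slotted for oral presentation.
orals = round(accepted * 0.033)
print(f"Approximate oral presentations: {orals}")
```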

“CVPR is not only the premier conference in computer vision, but it’s also among the highest-impact publication venues in all of science,” said David Crandall, Professor of Computer Science at Indiana University, Bloomington, Ind., U.S.A., and CVPR 2024 Program Co-Chair. “Having one’s paper accepted to CVPR is already a major achievement, and then having it selected as an oral presentation is a very rare honor that reflects its high quality and potential impact.”

Taking place 17-21 June at the Seattle Convention Center in Seattle, Wash., U.S.A., CVPR offers oral presentations that speak to both fundamental and applied research in areas as diverse as healthcare applications, robotics, consumer electronics, autonomous vehicles, and more. Examples include:

  • Pathology: Transcriptomics-guided Slide Representation Learning in Computational Pathology* – Training computer systems for pathology requires a multi-modal approach for efficiency and accuracy. New work from a multi-disciplinary team at Harvard University (Cambridge, Mass., U.S.A.), the Massachusetts Institute of Technology (MIT; Cambridge, Mass., U.S.A.), Emory University (Atlanta, Ga., U.S.A.) and others employs modality-specific encoders; when applied to liver, breast, and lung samples from two different species, the approach demonstrated significantly better performance than current baselines.
  • Robotics: SceneFun3D: Fine-Grained Functionality and Affordance Understanding in 3D Scenes – Creating realistic interactions in 3D scenes has been technically challenging because it is difficult to manipulate objects within the scene context. Research from ETH Zürich (Zürich, Switzerland), Google (Mountain View, Calif., U.S.A.), Technical University of Munich (TUM; Munich, Germany), and Microsoft (Redmond, Wash., U.S.A.) has begun bridging that divide by creating a large-scale dataset with more than 14.8k highly accurate interaction annotations for 710 high-resolution real-world 3D indoor scenes. This work, as the paper concludes, has the potential to “stimulate advancements in embodied AI, robotics, and realistic human-scene interaction modeling.”
  • Virtual Reality: URHand: Universal Relightable Hands – Teams from Codec Avatars Lab at Meta (Menlo Park, Calif., U.S.A.) and Nanyang Technological University (Singapore) unveil a hand model that generalizes to novel viewpoints, poses, identities, and illuminations, which enables quick personalization from a phone scan. The resulting images make for a more realistic experience of reaching, grabbing, and interacting in a virtual environment.
  • Human Avatars: Semantic Human Mesh Reconstruction with Textures – Working to create realistic human models, teams at Nanjing University (Nanjing, China) and Texas A&M University (College Station, Texas, U.S.A.) designed a method of 3D human mesh reconstruction that is capable of producing high-fidelity and robust semantic renderings that outperform state-of-the-art methods. The paper concludes, “This approach bridges existing monocular reconstruction work and downstream industrial applications, and we believe it can promote the development of human avatars.”
  • Text-to-Image Systems: Ranni: Taming Text-to-Image Diffusion for Accurate Instruction – Existing text-to-image models can misinterpret more difficult prompts, but now, new research from Alibaba Group (Hangzhou, Zhejiang, China) and Ant Group (Hangzhou, Zhejiang, China) has made strides in addressing that issue via a middleware layer. This approach, which they have dubbed Ranni, supports the text-to-image generator in better following instructions. As the paper sums up, “Ranni shows potential as a flexible chat-based image creation system, where any existing diffusion model can be incorporated as the generator for interactive generation.”
  • Autonomous Driving: Producing and Leveraging Online Map Uncertainty in Trajectory Prediction – To enable autonomous driving, vehicles must be pre-trained on the geographic region and potential pitfalls. High-definition (HD) maps have become a standard part of a vehicle’s technology stack, but current approaches to those maps are siloed in their programming. Now, a research team from the University of Toronto (Toronto, Ontario, Canada), Vector Institute (Toronto, Ontario, Canada), NVIDIA Research (Santa Clara, Calif., U.S.A.), and Stanford University (Palo Alto, Calif., U.S.A.) has enhanced current methodologies by incorporating uncertainty, resulting in up to 50% faster training convergence and up to 15% better prediction performance.

“As the field’s leading event, CVPR introduces the latest research in all areas of computer vision,” said Crandall. “In addition to the oral paper presentations, there will be thousands of posters, dozens of workshops and tutorials, several keynotes and panels, and countless opportunities for learning and networking. You really have to attend the conference to get the full scope of what’s next for computer vision and AI technology.”

Digital copies of all final technical papers* will be available on the conference website by the week of 10 June to allow attendees to prepare their schedules. To register for CVPR 2024 as a member of the press and/or request more information on a specific paper, visit https://cvpr.thecvf.com/Conferences/2024/MediaPass or email [email protected]. For more information on the conference, visit https://cvpr.thecvf.com/ .

*Papers linked in this press release refer to pre-print publications. Final, citable papers will be available just prior to the conference.

About CVPR 2024

The Computer Vision and Pattern Recognition Conference (CVPR) is the preeminent computer vision event for new research in support of artificial intelligence (AI), machine learning (ML), augmented, virtual and mixed reality (AR/VR/MR), deep learning, and much more. Sponsored by the IEEE Computer Society (CS) and the Computer Vision Foundation (CVF), CVPR delivers the important advances in all areas of computer vision and pattern recognition and the various fields and industries they impact. With a first-in-class technical program, including tutorials and workshops, a leading-edge expo, and robust networking opportunities, CVPR, which is annually attended by more than 10,000 scientists and engineers, creates a one-of-a-kind opportunity for networking, recruiting, inspiration, and motivation.

CVPR 2024 takes place 17-21 June at the Seattle Convention Center in Seattle, Wash., U.S.A., and participants may also access sessions virtually. For more information about CVPR 2024, visit cvpr.thecvf.com .

About the Computer Vision Foundation

The Computer Vision Foundation (CVF) is a non-profit organization whose purpose is to foster and support research on all aspects of computer vision. Together with the IEEE Computer Society, it co-sponsors the two largest computer vision conferences, CVPR and the International Conference on Computer Vision (ICCV). Visit thecvf.com for more information.

About the IEEE Computer Society

Engaging computer engineers, scientists, academia, and industry professionals from all areas and levels of computing, the IEEE Computer Society (CS) serves as the world’s largest and most established professional organization of its type. IEEE CS sets the standard for the education and engagement that fuels continued global technological advancement. Through conferences, publications, and programs that inspire dialogue, debate, and collaboration, IEEE CS empowers, shapes, and guides the future of not only its 375,000+ community members, but the greater industry, enabling new opportunities to better serve our world. Visit computer.org for more information.
