Artificial intelligence had something of a coming-out party in 2017. While intense sensationalization of AI persisted throughout the year, progress was made in helping people understand the realities of the AI-driven technologies and applications that guide several aspects of their personal digital lives. The inner workings of AI technologies, and how they make decisions, nevertheless remained a black box for many. Here’s how I see AI — and its journey into the mainstream — evolving in 2018.

The desire to create humanlike AI will fade

Sorry, Sophia the Robot, but the AI industry will begin moving away from developing technologies that reside in humanlike physical structures. This industry shift will become more prevalent as AI’s integration increases in the platforms and technologies people use to manage personal finances, locate public records, rate customer experiences, and learn new things. AI engineers and developers will move toward building algorithm-driven AI that responds, makes decisions, and interacts with people in a human way. To me, this is one of the most promising shifts AI will experience in 2018 because making AI look more human distracts from actual progress in making it act human.

There will be more focus on consumer adoption of AI

The AI industry will work to build trust with the people who buy and subscribe to its AI-driven products and services. In practice, this means proactively communicating software updates to — and potential risks of — workplace and personal voice assistants, chatbots, and platforms running on AI, and doing so in a simple way that regular people can comprehend. The ultimate goal of designing AI for the masses will be to make consumers more comfortable with AI. This will help to address widespread ethical and technical concerns, as well as boost AI adoption.

The regulatory landscape of AI will move forward

Industry players will begin to shed light on how they self-regulate enterprise applications of AI technologies as governments in the United Kingdom, the United States, the European Union, and elsewhere attempt to learn about the technology’s core values, risks, and practical future. This self-regulation will expand beyond AI to address business and public concerns about data privacy and protection. Accountability will remain a central issue, and pressure on the industry to explain how companies use data — especially consumer information — to build and inform AI applications will increase throughout 2018.

AI development will become accessible to a wider range of people

As recently as a few years ago, people needed advanced degrees in data science and engineering to build AI technologies, work with algorithms, and develop software. Today, developer tools, training programs, and accessible career opportunities exist to bring non-technical people into the AI fold. Companies like LinkedIn already have infrastructure in place to train engineers in AI development. In 2018, we will see an expansion of these tools, resources, and education opportunities to other staff. Technical experts and creative professionals will join forces to advance AI’s real-world applications. We will begin to see people without deep technical backgrounds taking positions on the front lines of building AI’s future in finance, technology, transportation, health care, and other vital industries.

People will learn to become coworkers with AI

Every new report on AI and jobs sparks public outcry and highlights the need for a deeper understanding of how advancements in AI will affect real jobs, talent, and workplaces. While it is true that some positions will be replaced by AI technologies, many will evolve to incorporate — and coexist with — AI in a manner that maximizes benefit to a company. Over the next year, companies will begin to develop retraining programs that teach non-technical employees how to work with AI in pursuit of better customer service, boosted productivity, and improved task accuracy.

Cybersecurity will use AI to counter sophisticated threats

While Hollywood’s idea that robots could take over the world hints at real threats tied to hackable technology, engineers will focus on addressing these issues at the non-physical, data, and algorithm levels with AI. Currently, hackers’ ability to breach technologies outpaces the cybersecurity industry’s ability to protect them. To catch up, tech industry leaders like Google, Facebook, and Amazon will look for more opportunities to team up with smaller startups and academic researchers at MIT, NYU, and other leading institutions to produce more resilient, AI-driven security. In practice, these partnerships will help to build hardened AI systems that can be deployed across networks and platforms to monitor, discover, and prevent hacks.

The AI industry will address more complex problems

AI encompasses a network of complex and crucial technologies. Many of AI’s current enterprise and consumer applications address small and targeted problems. A smart assistant might be able to guide you to file expenses correctly. A search algorithm might direct you to the best plumber in Ontario. A voice assistant might uncover a musical world you never knew existed. Meanwhile, AI technologies exist today that could address far more intricate issues in business and regular life, from managing an entire workforce to tackling climate change. Over the next year, I see companies across industries beginning to externally deploy these AI solutions to address bigger, more complex, and more public problems.

New year, new AI opportunities

Ultimately, I believe the AI industry will make great progress in uncovering the nuances and intricacies of AI technologies in the coming year. AI’s applications will continue to multiply and diversify. The responsibility for boosting visibility and public awareness around these applications will fall squarely on the AI industry. There will be stronger research and development connections within the AI industry, and between the private, public, and academic sectors.

A global conversation around the importance of developing ethical, unbiased, and responsible AI took off in 2017. That conversation will turn into action in 2018. Industry leaders will focus on innovating AI technologies to solve bigger industry and societal problems, democratizing AI development tools, unveiling self-regulation approaches, and communicating AI’s true value to the general public. 2018 will be an epic year for AI.

Kriti Sharma is the vice president of AI at Sage, a global integrated accounting, payroll, and payment systems provider. She is also the creator of Pegg, the world’s first AI accounting smart assistant, with users in 135 countries.