8 AI Trends for 2020

Artificial Intelligence Newsletter from O'Reilly
This email, which I receive regularly from O'Reilly and which they expressly allow to be shared, offers a glimpse of what is happening in AI. I hope it is of interest to you.



8 AI trends we’re watching in 2020
Roger Magoulas

O’Reilly’s Roger Magoulas takes a look at the new developments in automation, hardware, tools, model development, and more that will shape (or accelerate) AI in 2020.
1. Signs point toward an acceleration of AI adoption.
We see the AI space poised for an acceleration in adoption, driven by more sophisticated AI models being put in production, specialized hardware that increases AI’s capacity to provide quicker results based on larger datasets, simplified tools, small tools that enable AI on nearly any device, and cloud access that makes AI resources available from anywhere.
Integrating data from many sources, complex business and logic challenges, and competitive incentives to make data more useful all combine to elevate AI and automation technologies from optional to required. And AI processes have unique capabilities that can address an increasingly diverse array of automation tasks—tasks that defy what traditional procedural logic and programming can handle, for example, image recognition, summarization, labeling, complex monitoring, and response.
In fact, in our 2019 surveys over half of the respondents say AI (deep learning, specifically) will be part of their future projects and products, and a majority of companies are starting to adopt machine learning.
2. The line between data and AI is blurring.
Access to the amount of data necessary for AI, proven use cases for both consumer and enterprise AI, and more accessible tools for building applications have grown dramatically, spurring new AI projects and pilots.
To stay competitive, data scientists need to at least dabble in machine and deep learning. At the same time, current AI systems rely on data-hungry models, so AI experts will require high-quality data and a secure and efficient data pipeline. As these disciplines merge, data professionals will need a basic understanding of AI, and AI experts will need a foundation in solid data practices, and likely, a more formal commitment to data governance.
That’s why we decided to combine the 2020 O’Reilly AI and Strata Data Conferences in San Jose, London, and New York.
3. New (and simpler) tools, infrastructures, and hardware are being developed.
We’re in a highly empirical era for machine learning. Tools for machine learning development need to account for the growing importance of data, experimentation, model search, model deployment, and monitoring. At the same time, managing the various stages of AI development is getting easier with the growing ecosystem of open source frameworks and libraries, cloud platforms, proprietary software tools, and SaaS.
4. New models and methods are emerging.
While deep learning continues to drive a lot of interesting research, most end-to-end solutions are hybrid systems. In 2020 we’ll hear more about the essential role of other components and methods—including Bayesian and other model-based methods, tree search, evolution, knowledge graphs, simulation platforms, and others. We also expect to see new use cases for reinforcement learning emerge. And we just might begin to see exciting developments in machine learning methods that aren’t based on neural networks.
5. New developments enable new applications.
Developments in computer vision and speech and voice (“eyes and ears”) technology help drive the creation of new products and services that can make personalized, custom-sized clothing, drive autonomous harvesting robots, or provide the logic for proficient chatbots. Work on robotics (“arms and legs”) and autonomous vehicles is compelling and closer to market.
There’s also a new wave of startups targeting “traditional data” with new AI and automation technologies. This includes text (new NLP and NLU solutions; chatbots), time series and temporal data, transactional data, and logs. And both traditional enterprise software vendors and startups are rushing to build AI applications that target specific industries or domains. This is in line with findings in a recent McKinsey survey: enterprises are using AI in areas where they’ve already invested in basic analytics.
6. Handling fairness—working from the premise that all data has built-in biases.
Taking a cue from the software quality assurance world (which assumes that bugs exist in software and need to be fixed), those working on AI models need to assume their data has built-in or systemic bias and other issues and that formal processes are needed to detect, correct, and address those issues. Detecting bias and ensuring fairness don’t come easy and are most effective when subject to review and validation from a diverse set of perspectives.
That means building intentional diversity—cognitive diversity, socioeconomic diversity, cultural diversity, and physical diversity—into the processes used to detect unfairness and bias to help improve the process and mitigate the risk of missing something critical.
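As an illustration of what one such formal check might look like (this example is not from the newsletter), a simple starting point is to compare a model's positive-prediction rate across groups, often called the demographic parity gap. Below is a minimal sketch in plain Python, assuming hypothetical record fields named "group" and "prediction"; a real audit would use many more metrics and real validation data.

    # Minimal, illustrative bias check: demographic parity gap.
    # Assumes each record carries a hypothetical "group" label and a
    # binary model "prediction"; real audits need more metrics and
    # domain review from diverse perspectives.
    from collections import defaultdict

    def positive_rate_by_group(records):
        """Return the share of positive predictions for each group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
        for r in records:
            counts[r["group"]][0] += r["prediction"]
            counts[r["group"]][1] += 1
        return {g: pos / total for g, (pos, total) in counts.items()}

    def demographic_parity_gap(records):
        """Largest gap in positive-prediction rates across groups."""
        rates = positive_rate_by_group(records)
        return max(rates.values()) - min(rates.values())

    if __name__ == "__main__":
        sample = [
            {"group": "A", "prediction": 1},
            {"group": "A", "prediction": 1},
            {"group": "A", "prediction": 0},
            {"group": "B", "prediction": 1},
            {"group": "B", "prediction": 0},
            {"group": "B", "prediction": 0},
        ]
        print(f"Demographic parity gap: {demographic_parity_gap(sample):.2f}")
        # A team might flag models whose gap exceeds an agreed threshold.

In practice a team would track several such metrics over real validation data and pair them with the kind of qualitative review by diverse perspectives described above.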
7. Machine deception continues to be a serious challenge.
Deepfakes have tells that automated detection systems can look for: unnatural blinking patterns, inconsistent lighting, facial distortion, inconsistencies between mouth movements and speech, and the lack of small but distinct individual facial movements (how Donald Trump purses his lips before answering a question, for example).
But deepfakes are getting better. With 2020 a US election year, automated detection methods will have to be developed as fast as new forms of machine deception are launched. And automated detection may not be enough: deepfake creators can use the detection models themselves to stay ahead of the detectors. Within a couple of months of the release of an algorithm that spots unnatural blinking patterns, for example, the next generation of deepfake generators had incorporated blinking into their systems.
Programs that automatically watermark and identify images when they are taken or altered, or blockchain technology that verifies content from trusted sources, could be a partial fix, but as deepfakes improve, trust in digital content diminishes. Regulation may be enacted, but the path to effective regulation that doesn’t interfere with innovation is far from clear.
8. To fully take advantage of AI technologies, you’ll need to retrain your entire organization.
As AI tools become easier to use, AI use cases proliferate, and AI projects are deployed, cross-functional teams are being pulled into AI projects. Data literacy will be required from employees outside traditional data teams; in fact, Gartner expects that 80% of organizations will start to roll out internal data literacy initiatives to upskill their workforce by 2020.
But training is an ongoing endeavor, and to succeed in implementing AI and ML, companies need to take a more holistic approach toward retraining their entire workforces. This may be the most difficult, but most rewarding, process for many organizations to undertake. The opportunity for teams to plug into a broader community on a regular basis to see a wide cross-section of successful AI implementations and solutions is also critical.
Retraining also means rethinking diversity. Diversity is not only important for detecting fairness and bias issues; it becomes even more critical for organizations looking to successfully implement truly useful AI models and related technologies. As we expect most AI projects to augment human tasks, incorporating the human element in a broad, inclusive manner becomes a key factor for widespread acceptance and success.
Special thanks to Ben Lorica for his insights and help with this piece.