Explicit programming is what most people imagine when they think about what programmers do: identify a problem, then design an algorithm to solve it. Deep learning, however, is changing all of this. Deep learning is an approach in which programs learn their behavior from data, loosely modeled on how the human brain works. Even though interacting with your favorite sci-fi robot personalities may still be decades away, deep learning has made strides in artificial intelligence circles, enabling successful self-driving cars and highly accurate language translators. Matt Borstein, principal at Blumberg Capital, offers his assessment of how deep learning will further influence programming and the coming AI revolution.
Programming and data science will increasingly converge. Most software will not incorporate “end-to-end” learning systems for the foreseeable future. Instead, it will rely on data models to provide core cognition capabilities, and on explicit logic to interface with users and interpret results. The question “should I use AI or a traditional approach to this problem?” will come up more and more often. Designing intelligent systems will require mastery of both.
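This hybrid design can be sketched in a few lines. The example below is illustrative only: the “model” is a hypothetical stand-in scoring function, not a trained network, and all names (`model_score`, `classify_email`, the threshold value) are assumptions made for the sketch. The point is the division of labor: a learned component produces a probability, and explicit logic interprets it for the user.

```python
def model_score(text):
    """Stand-in for a trained classifier: returns a spam probability.
    A real system would call an actual learned model here."""
    spam_words = {"winner", "prize", "free"}
    words = text.lower().split()
    hits = sum(1 for w in words if w in spam_words)
    return hits / max(len(words), 1)

def classify_email(text, threshold=0.3):
    """Explicit logic wrapped around the model: thresholds, an uncertain
    region deferred to a human, and a readable result instead of a raw
    probability."""
    score = model_score(text)
    if score >= threshold:
        return "spam"
    if score > 0:
        return "needs review"  # uncertain region: defer to a person
    return "ham"

print(classify_email("You are a winner claim your free prize"))  # -> spam
print(classify_email("Lunch at noon?"))                          # -> ham
```

Notice that neither half is sufficient alone: the model supplies the judgment, while the surrounding logic decides what that judgment means for the application.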
AI practitioners will be rock stars. Doing AI is hard. Rank-and-file AI developers – not just brilliant academics and researchers – will be among the most valuable resources for software companies in the future. This carries a touch of irony for traditional coders, who have automated work in other industries since the 1950s and who now face partial automation of their own jobs. Demand for their services will certainly not decline, but those who want to remain at the forefront must, with a healthy dose of skepticism, test the waters in AI.
The AI toolchain needs to be built. Gil Arditi, machine learning lead at Lyft, said it best. “Machine learning is in the primordial soup phase. It’s similar to a database in the early ‘80s or late ‘70s – You really had to be a world’s expert to get these things to work.” Studies also show that many AI models are difficult to explain, trivial to deceive and susceptible to bias. Tools to address these issues, among others, will be necessary to unlock the potential of AI developers.
We all need to get comfortable with unpredictable behavior. The metaphor of a computer “instruction” is familiar to developers and users alike. It reinforces the belief that computers do exactly what we say and that similar inputs always produce similar outputs. AI models, by contrast, act like living, breathing systems. New tooling will make them behave more like explicit programs, especially in safety-critical settings, but we risk losing the value of these systems – like AlphaGo’s “alien” moves – if we set the guardrails too tightly. As we develop and use AI applications, we need to understand and embrace probabilistic outcomes.
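The contrast between explicit instructions and probabilistic outcomes can be made concrete. In the toy sketch below (the logits are made up for illustration), a model emits a distribution over answers rather than a single answer; a consumer that takes the argmax behaves predictably, while one that samples can return different outputs for the exact same input.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["yes", "no", "maybe"]
logits = [2.0, 0.5, 1.2]  # hypothetical raw model scores for one input
probs = softmax(logits)

# A deterministic consumer takes the most likely label every time.
best = labels[max(range(len(probs)), key=probs.__getitem__)]

# A sampling consumer draws from the distribution, so repeated runs
# on the same input can produce different answers.
sampled = random.choices(labels, weights=probs, k=5)

print(best)     # always "yes" for these logits
print(sampled)  # varies from run to run
```

Guardrails like temperature or thresholds can push such a system toward the argmax behavior, but, as the AlphaGo example suggests, clamping it all the way down discards exactly the surprising answers that make it valuable.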
If AI is the future of programming, many fields will soon have to work with systems in which the same input does not always produce the same output. Is computing prepared to accept “maybe” as an answer from a deep learning computer? Only time will tell, but the future of AI software is certainly promising.