AI and the Future of Work Is About Lifelong Learning
I often get asked what the most important skills are for a student to learn going into the coming decade of new AI technology. I have some ideas about why I’m asked this, but it still surprises me how desperate some people are to know the “secret” winning skills of the future.
The World Economic Forum’s own list for 2020 is basically a shuffle of their list from 2015, with complex problem solving at the top of both. And while I don’t know exactly how long it’s been around, I don’t think it’s a particularly new idea that college education is about developing critical thinking skills, learning how to learn, and being able to determine cause and effect in a complex system.
What technology changes is the availability of tools to foster these skills throughout our adult careers in order to make a well-rewarded contribution to the economy. A growing number of adults today are finding time for continuous learning and mindfulness, seeing them as essential practices for being an effective modern worker. In other words, the future of work is what our society’s intellectuals have been saying should be happening in organizations for decades now; only, the traditional organization still forces us to do it outside of work.
Today’s dominant business models have been built around achieving economies of scale, maximizing the efficiency of narrowly defined tasks (which makes those tasks easy targets for automation). We then allow a select few in the C-Suite to consider strategy, but even executives struggle to fit in learning and personal development. The moments of study and reflection that lead to clear thinking about the systems the organization is subject to become rare, isolated incidents within their day jobs of putting out fires and maintaining legacies laid out long before them.
For most established organizations, nearly everyone's focus has become how they can do the same job more efficiently, rather than asking why they are doing what they are doing. If anyone does start questioning the function they are performing, it often means they are leaving that company rather than changing the job itself.
The startup world has done well in part because it's super clear that everyone working in a new company is in a position to design their job and the rest of the organization. At those startups, some of which are the biggest companies in the world now, employees have been empowered with a sense of the strategic direction, their connection with the rest of the system, and the autonomy to redesign their work as they adapt to new information.
At my own startup, Element AI, everyone is creating new intellectual property. At barely a year old, each person is shaping their role and their department, and making a direct impact on the company’s overall form. It can be a scary thing for a lot of people, but it’s an ideal setup when the underlying science of AI technology is moving at such high speed. We’d be dead in the water if our people weren’t in a position to bring new developments into their work daily.
Optimize, simplify, command, and control
The shelf life of a typical organization is 20-30 years. To start one, you would figure out a service that you could get paid as much as possible for, build a market for it, then scale by driving optimization. But as you do that, change becomes more difficult. And expectations are changing. Customers expect you to customize and personalize your product or service, and to be flexible to whatever circumstances come your way. Amazon is a great example of this: While they’ve become the fourth largest company in the world by market capitalization, they’ve remained flexible by design and are continuing to gobble up new markets almost as soon as they emerge. However, Amazon is the exception.
Large organizations are optimizing for things they know, and sometimes deal with things they know they don't know. But they are extremely ill-equipped to deal with what they don’t know they don’t know—the unknown unknowns. The reason companies like Amazon do well in this world is that they are good at sensing, searching and finding new dynamics they didn't even know they needed to know. In other words, the challenge is how we deal with success, and what it is we are scaling.
The focus is too often on the fruits of success, and trying to optimize, simplify, command, and control a solution for a problem that is in fact a moving target. We essentially cut ourselves off from the real cause of success: learning the unknown unknowns and being able to quickly adapt to them.
“Learning & Adapting”; “Flat & Fast”
The cyber-industrial revolution will help, if not outright force, organizations to shift from being 'Commanded & Controlled' to 'Learning & Adapting', and AI technologies are a big enabler of this shift in organizational thinking. Industry 4.0 and technologies like IoT and AI are unlocking new perceptive senses for organizations. It’s like suddenly gaining the sense of sight after being blind. The possibility of discovering and integrating the unknown unknowns is just beginning to spread through industries; and while reshaping the organizational structure around learning from these new senses is not easy, it is possible.
Ask any digital evangelist: transformation is a tough sell. Then there’s the question of actually doing it, and I think those same evangelists have more bad stories than good about companies still struggling to get it, even as AI rounds the corner to raise the hurdle higher. I did, though, learn about a convincing example at an Aspen Institute roundtable this summer. It’s striking because it happened within one of the most rigid and hierarchical organizations you can find: the US Military.
In the mid ‘00s, General Stanley McChrystal led the Joint Special Operations Command, which oversaw all the elite units from each of the divisions of the US Military. When McChrystal took over, JSOC was focused on carrying out a limited number of carefully planned operations. Critically, information gleaned from these operations had to be sent back to intelligence analysts in the U.S. for interpretation, which diminished its operational value.
McChrystal decided that JSOC needed to change to become “flat and fast.” He embedded Washington, D.C. analysts into the JSOC teams on the front lines, essentially eliminating the information bottleneck between operations and Washington. He also implemented several other policies and practices that expanded the flow of information and increased people’s autonomy to act on it, replacing the old long, narrow path of decision making. Long story short: JSOC went from carrying out 10 operations a month to 300. (I strongly recommend reading the booklet, which is free online here.)
5 Disciplines of the Learning Organization
Systems Thinking - Thinking of the organization as a single system that is itself made up of, and contained within, many other systems. This way of seeing the world is necessary for understanding cause and effect.
Personal Mastery - Learning happens at the individual level, and so each person must have their own intent and motivation to learn.
Mental Models - We all operate based on fundamental assumptions about our environment. Not only are unconscious assumptions a missed opportunity for learning, certain unchallenged assumptions can directly block learning.
Shared Vision - Having a shared vision that is rooted in the visions of the ground-level teams empowers individuals to align themselves (and their learning) with that vision and quickly be able to act on new information for the better of the company.
Team Learning - Effectively bringing together individual learning can yield more than the simple sum when individuals and teams cross boundaries to engage in dialogue.
Peter Senge’s “fifth discipline” is systems thinking: being able to see the interconnectedness of things and their effects on each other. The organization is complicated because it is a system of systems, and one of the things AI is good at is connecting disparate systems. This AI thing is moving fast. Just to be able to navigate the new, and rising, complexity of AI, we need to be completely open-minded about how we do business.
AI is a double-edged sword
On the one hand, AI is making an already tumultuous world orders of magnitude more complicated by compressing the lead time to many technologies we didn’t expect to be viable for years or even decades. On the other hand, AI is a promising new tool for navigating all that complexity.
We are always operating on assumptions based on our past observations of the world. When the environment changes, it is extremely costly and time-consuming to capture the new rule and scale it across the organization. First there is coordinating a response; then there is the matter of replicating it across the organization, with new training and informing people of the changes. It’s super cumbersome to work against rigidly established processes.
With AI, the cost of applying new rules drops dramatically because we can simulate multiple scenarios that take into consideration the vast complexity of the business itself, saving the arduous information-gathering process required to justify a change in direction. Simple adjustments can be made automatically, leaving more time to consider larger strategic decisions with information not yet captured by the machine (say, information from the last board meeting). In either case, the new rules are implemented quickly and skip the distortion, delays, and latency of human communication.*
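To make the idea concrete, here is a minimal, purely illustrative sketch of that loop: score a few candidate rule changes against many simulated scenarios, auto-apply small adjustments that look beneficial, and escalate larger ones to human decision makers. Every name, payoff function, and threshold below is a made-up assumption, not a description of any real system.

```python
import random

def simulate_rule(rule_adjustment, n_scenarios=1000, seed=42):
    """Estimate the average impact of a candidate rule change across
    many noisy demand scenarios (a stand-in for a richer simulation
    of the business)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_scenarios):
        demand = rng.gauss(100, 15)                       # uncertain demand
        profit = demand * (1.0 + rule_adjustment) - 100   # toy payoff model
        total += profit
    return total / n_scenarios

def evaluate_candidates(candidates, auto_apply_threshold=0.05):
    """Score each candidate adjustment. Small beneficial changes are
    applied automatically; large beneficial ones are escalated to
    humans; harmful ones are rejected."""
    decisions = []
    for adj in candidates:
        impact = simulate_rule(adj)
        if impact > 0 and abs(adj) <= auto_apply_threshold:
            decisions.append((adj, impact, "auto-apply"))
        elif impact > 0:
            decisions.append((adj, impact, "escalate to humans"))
        else:
            decisions.append((adj, impact, "reject"))
    return decisions

for adj, impact, action in evaluate_candidates([0.02, 0.10, -0.03]):
    print(f"adjustment {adj:+.2f}: est. impact {impact:+.1f} -> {action}")
```

The point of the sketch is the division of labor, not the arithmetic: the machine does the cheap, repeatable scenario evaluation, while the threshold marks where human judgment (with context the model doesn’t have) takes over.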
For the organizations that failed to heed the digital evangelists, what I am talking about will require a serious overhaul of their IT systems just to keep up to speed. And unless you’re a brand-new startup (or Amazon), every company will also need to reshape its organizational structures and incentives for learning, and augment its people into knowledge workers,** as part of its transformation around AI.
Some difficult changes that come with becoming a learning organization
Companies can no longer reliably depend on a workforce that is only concerned with its immediate environment/department/cubicle. But it’s a two-way street; a workforce also can’t expect a company to keep them gainfully employed if it doesn’t empower them to help shape the strategic direction on a daily basis. Given how we’ve seen individuals adapt to the information age, I think the bigger challenge is for the companies because of three implicit changes that come with becoming an organization designed for the future of work:
Bringing information processing to the front lines
Increasing autonomy in decision making
Changing the incentives to allow risk taking and failure
Bringing information processing to the front lines - End the cult of hierarchy and allow people on the front lines to see the rest of the system and what’s happening in it. Investing in tools that augment your workforce into knowledge workers will also mean breaking silos and decentralizing the sources of information.
We need knowledge workers who can creatively bring together the valuable information that AI can deliver and adjust as it relates to the big-picture narrative. We need to equip the people who are executing decisions to understand the dynamics of the company as a whole. “What are our parameters?” “What are we trying to do?” “What service are we providing to our customers?” Help them see the big picture through AI-empowered tools that deliver relevant, actionable information as well as a common narrative. The people concerned only with the relatively small world of their own tasks, regardless of what level of the company they are at, are the ones most at risk of being replaced.
Increasing autonomy in decision making - The point is to move on the information when it is still valuable. Question your intuition when it comes to authority, access to information, and decision making. Build trust to flatten the hierarchy.
The job of the leader is to ask better questions. Stop asking about marginal increases in efficiency, which AI will soon take care of anyway, and start asking how to guide the systems, construct the right feedback loops that reinforce the results we want to see, and eliminate what isn’t useful. Your consumers will actually help you with some of this, but a lot of it is going to be shaped by the organization. The skill set of the future is going to be about asking the right questions. The best people an organization can hire in the future are going to be those who write down the best hypotheses and quickly figure out a way to test and resolve them. This requires autonomy to experiment, which leads to the next point...
Changing the incentives to allow risk taking and failure - Experiment and fail fast. Empower your people to make mistakes, then ramp up what works and kill the projects that are not delivering results. As we work out the answers for our respective industries and companies, we will need to accept failure and being outright wrong. It’s as important to know what doesn’t work as it is to know what does.
Incentives can help us fail better: fail smaller, and learn more from our mistakes. Creating incentive structures around failing better will prevent counter-productive tendencies like avoiding risk taking, sweeping the failures that do occur under the rug, and offloading blame when trying to understand what really happened.
Society’s transformation around AI
I think crises are good at pushing us temporarily into open-mindedness. For a lot of the world, AI will be a crisis. But with this crisis, it’s important to realize the change is permanent, and our new open-mindedness should be too. There will be some basic changes to make across the board, like Peter Senge’s disciplines of the learning organization, though these will manifest in very different ways across organizations in the years to come.
We can't rewrite law or reinvent how we do business from scratch. In order to adapt, we need to pick up principles for how we reconsider design thinking, translate our current law and policy, and even change the way we see the world and each other, ahead of this fast-moving, world-changing technology.
This kind of principles-first thinking is what I see being needed across our society as it deals with the impact of AI. Technology does not appear in a vacuum; it emerges from an environment of particular laws, ethics, social equality, externalities, and all the other ways we conduct our business. While technology does impact these conditions, it cannot alone fix their inherent problems, and it is far more likely to amplify them unless we also use technology as an opportunity to change our thinking.
*This will destroy a lot of jobs (i.e., the links in the communication chain) and likely turn middle management into the new front-line workers. This is a big problem that all sectors of society need to think hard about how to handle, and I plan on coming back to it in more detail in the near future.
**A knowledge worker is not someone who just knows a lot of facts; they can think creatively about the facts, connect the dots, and create new ideas and new IP.
Photo by Ghost Presenter.