The current wave of advances in artificial intelligence doesn’t actually bring us intelligence but instead a critical component of intelligence: prediction.
Prediction is a central input into decision-making. Economics can help us understand the impact of advances in prediction technology by leveraging decision theory.
There is no single best AI strategy or best set of AI tools, because AI involves trade-offs: more speed, less accuracy; more autonomy, less control; more data, less privacy.
When the price of something falls, we use more of it. To understand AI’s impact, we need to know precisely what price has changed and how that price change will cascade throughout the broader economy.
This change in the relative costs of certain activities can radically influence some companies’ business models and even transform some industries.
When the price of something fundamental drops drastically, the whole world can change.
Technological change makes things cheap that were once expensive. AI will be economically significant precisely because it will make something important, namely prediction, much cheaper.
Not only are we going to start using a lot more prediction, but we are also going to see it emerge in surprising new places.
Prediction is the process of filling in missing information. Prediction takes information you have, often called “data,” and uses it to generate information you don’t have.
Cheaper prediction will mean more predictions. Prediction is being used for traditional tasks, like inventory management and demand forecasting. More significantly, because it is becoming cheaper, it is being used for problems that were not traditionally prediction problems (e.g., navigation and transportation).
Critically, when an input such as prediction becomes cheap, this can enhance the value of other things called complements (data, judgment, and action) and diminish the value of substitutes (human prediction). For autonomous vehicles, a drop in the cost of prediction increases the value of sensors to capture data on the vehicle’s surroundings.
When prediction is cheap, there will be more prediction and more complements to prediction.
At low levels, a prediction machine can relieve humans of predictive tasks and so save on costs.
As the machine cranks up, prediction can change and improve decision-making quality. But at some point, a prediction machine may become so accurate and reliable that it changes how an organization does things.
Some AIs will affect the economics of a business so dramatically that they will no longer be used to simply enhance productivity in executing against the strategy; they will change the strategy itself. For instance, if Amazon can predict what shoppers want, then they may move from a shop-then-ship model to a ship-then-shop model—bringing goods to homes before they are ordered. Such a shift will transform the organization.
Innovations in prediction technology are having an impact on areas traditionally associated with forecasting, such as fraud detection. For example, credit card fraud detection has improved so much that credit card companies detect and address fraud before we notice anything amiss. Machine learning improved fraud detection accuracy from about 98 percent to 99.9 percent today. The change might seem incremental, but small changes are meaningful when mistakes are costly.
The drop in the cost of prediction is transforming many human activities. In addition to fraud detection, these include creditworthiness, health insurance, and inventory management. Creditworthiness involves predicting the likelihood that someone will pay back a loan. Health insurance involves predicting how much an individual will spend on medical care. Inventory management involves predicting how many items will be in a warehouse on a given day.
Prediction is a foundational input. Predictions are everywhere. Often our predictions are hidden as inputs into decision making. Better prediction means better information, which means better decision making.
Prediction is artificially generated "intelligence" in the sense of obtaining useful information. Better predictions lead to better outcomes.
As the cost of prediction continues to fall, it will be useful for a remarkably broad range of activities, like machine language translation, that were previously unimaginable.
Machine learning science had different goals from statistics. Whereas statistics emphasized being correct on average, machine learning did not require that. Instead, the goal was operational effectiveness.
Predictions could be biased, so long as they were better overall (something that became possible with powerful computers).
Traditional statistical methods require the articulation of hypotheses or at least of human intuition for model specification.
Machine learning has less need to specify in advance what goes into the model and can accommodate the equivalent of much more complex models with many more interactions between variables.
(1) systems predicated on ML learn and improve over time;
(2) these systems produce significantly more-accurate predictions than other approaches under certain conditions, and some experts argue that prediction is central to intelligence; and
(3) the enhanced prediction accuracy of these systems enables them to perform tasks, such as translation and navigation, that were previously considered the exclusive domain of human intelligence.
Prediction machines rely on data. More and better data leads to better predictions. In economic terms, data is a key complement to prediction. It becomes more valuable as prediction becomes cheaper.
With AI, data plays three roles.
First is input data, which is fed to the algorithm and used to produce a prediction.
Second is training data, which is used to generate the algorithm in the first place. Training data is used to train the AI to become good enough to predict in the wild.
Finally, there is feedback data, which is used to improve the algorithm’s performance with experience.
But data can be costly to acquire. The cost of data collection depends on how much data you need and how intrusive the collection process is. It is critical to balance the cost of data acquisition with the benefit of enhanced prediction accuracy. Determining the best approach requires estimating the ROI of each type of data: how much will it cost to acquire, and how valuable will the associated increase in prediction accuracy be?
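The cost-benefit calculation described above can be sketched in code. This is an illustrative sketch only: the data options, costs, and accuracy gains below are hypothetical numbers, and in practice the accuracy gain would come from pilot experiments while the value per accuracy point would come from business analysis.

```python
# Hedged sketch: comparing hypothetical data investments by expected ROI.

def roi(acquisition_cost, accuracy_gain, value_per_point):
    """Return ROI: value of the accuracy gain relative to its cost."""
    benefit = accuracy_gain * value_per_point
    return (benefit - acquisition_cost) / acquisition_cost

# Hypothetical options: (name, cost in $, accuracy gain in points,
# value per accuracy point in $)
options = [
    ("more transaction history", 10_000, 0.5, 40_000),
    ("third-party demographics", 25_000, 0.3, 40_000),
    ("real-time sensor feed",    60_000, 0.4, 40_000),
]

# Rank the options from best to worst return on the data investment.
for name, cost, gain, value in sorted(
        options, key=lambda o: roi(o[1], o[2], o[3]), reverse=True):
    print(f"{name}: ROI = {roi(cost, gain, value):.2f}")
```

The point of the sketch is the comparison itself: once each data type's cost and benefit are estimated, the acquisition decision reduces to a ranking.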
To make the right data investment decisions, you must understand how prediction machines use data. The particular prediction problem will determine the data needs.
How many different types of data do you need?
How many different objects are required for training?
How frequently do you need to collect data?
More types, more objects, and more frequency mean higher cost but also potentially higher benefit.
More data improves prediction. But how much data do you need? The benefit of additional information (whether in terms of number of units, types of variables, or frequency) may increase or decrease with the existing amount of data.
Data may have increasing or decreasing returns to scale. From a purely statistical point of view, data has decreasing returns to scale. As you add observations to your training data, each one becomes less and less useful for improving your prediction.
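The statistical claim can be illustrated with a textbook example: the standard error of an estimated mean shrinks like 1/sqrt(n), so each additional observation improves the prediction by less than the one before. The noise level below is a hypothetical stand-in.

```python
# Hedged sketch of decreasing statistical returns to data: the standard
# error of a mean falls like 1/sqrt(n), so going from 100 to 1,000
# observations buys far more precision than going from 10,000 to 100,000.
import math

def standard_error(sigma, n):
    """Standard error of a sample mean with noise sigma and n observations."""
    return sigma / math.sqrt(n)

sigma = 10.0  # hypothetical noise level
for n in [100, 1_000, 10_000]:
    gain = standard_error(sigma, n) - standard_error(sigma, n * 10)
    print(f"n={n:>6}: SE={standard_error(sigma, n):.3f}, "
          f"improvement from 10x more data={gain:.3f}")
```

Each tenfold increase in data shrinks the error by the same factor, but the absolute improvement gets smaller every time, which is the statistical sense in which data has decreasing returns.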
This might not be true from an economic point of view, which is not about how data improves prediction. It is about how data improves the value you get from the prediction. Sometimes prediction and outcome go together, so the decreasing returns to observations in statistics imply decreasing returns in terms of the outcomes you care about. Sometimes, however, they are different.
So, while data technically has decreasing returns to scale, from a business viewpoint data might be most valuable if you have more and better data than your competitor. Some have argued that more data about unique factors brings disproportionate rewards in the market. Thus, from an economic point of view, in such cases data may have increasing returns to scale.
Division of labor involves allocating roles based on relative strengths. Here, the division of labor is between humans and machines in generating predictions.
Humans and machines both have failings. Without knowing what they are, we cannot assess how machines and humans should work together to generate predictions.
With rich data, machine prediction can work well. The machine knows the situation, in the sense that it supplies a good prediction. And we know the prediction is good. This is the sweet spot for the current generation of machine intelligence. Fraud detection, medical diagnosis, baseball player evaluation, and bail decisions all fall under this category.
Many people find it challenging to make predictions based on sound statistical principles, which is precisely why they bring in experts. Unfortunately, those experts can exhibit the same biases and difficulties with statistics when making decisions. These biases plague fields as diverse as medicine, law, sports, and business. Prediction proves so difficult for humans because of the complexity of the factors.
Prediction machines are much better than humans at factoring in complex interactions among different indicators.
Even the best prediction models of today (and in the near future) require large amounts of data, meaning we know our predictions will be relatively poor in situations where we do not have much data. We know that we don’t know: known unknowns. We might not have much data because some events are rare, so predicting them is challenging.
In contrast to machines, humans are sometimes extremely good at prediction with little data. Because these are known unknowns and because humans are still better at decisions in the face of known unknowns, the people managing the machine know that such situations may arise and thus they can program the machine to call a human for help.
In order to predict, someone needs to tell a machine what is worth predicting. If something has never happened before, a machine cannot predict it (at least without a human’s careful judgment to provide a useful analogy that allows the machine to predict using information about something else). We cannot predict truly new events from past data.
Humans are also relatively bad at predicting unknown unknowns. Faced with unknown unknowns, both humans and machines fail.
Unknown knowns arise when an association that appeared strong in the past is actually the result of some unknown or unobserved factor that changes over time, rendering predictions we thought we could make unreliable. Here, prediction machines fail in ways that go beyond the well-understood limits of statistics.
With unknown knowns, prediction machines appear to provide a very precise answer, but that answer can be very wrong. Because, while data informs decisions, data can also come from decisions. If the machine does not understand the decision process (“reverse causality” and “omitted variables”) that generated the data, its predictions can fail.
The issue of unknown knowns and causal inference is even more important in the presence of others’ strategic behavior.
Sometimes, the combination of humans and machines generates the best predictions, each complementing the other’s weaknesses.
Machine prediction can enhance the productivity of human prediction via two broad pathways.
The first is by providing an initial prediction that humans can use to combine with their own assessments.
The second is to provide a second opinion after the fact, or a path for monitoring.
One major benefit of prediction machines is that they can scale in a way that humans cannot. One downside is that they struggle to make predictions in unusual cases for which there isn’t much historical data. Combined, this means that many human-machine collaborations will take the form of “prediction by exception.”
Prediction machines learn when data is plentiful, which happens when they are dealing with more routine or frequent scenarios. In these situations, the prediction machine operates without the human partner expending attention. By contrast, when an exception arises—a scenario that is non-routine—it is communicated to the human, and then the human puts in more effort to improve and verify the prediction.
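This division of labor can be sketched as a simple routing rule: the machine acts on its own when its confidence is high, and escalates to a human otherwise. The threshold, the cases, and the stand-in model below are all hypothetical.

```python
# Hedged sketch of "prediction by exception": routine, high-confidence
# cases are handled by the machine; non-routine, low-confidence cases
# are escalated to a human.

def decide(case, machine_predict, human_review, confidence_threshold=0.9):
    prediction, confidence = machine_predict(case)
    if confidence >= confidence_threshold:
        return prediction                      # routine: machine acts alone
    return human_review(case, prediction)      # exception: escalate

# Hypothetical stand-ins for a trained model and a human reviewer.
def machine_predict(case):
    return case["likely_label"], case["confidence"]

def human_review(case, machine_guess):
    return f"human-verified:{machine_guess}"

routine = {"likely_label": "approve", "confidence": 0.97}
unusual = {"likely_label": "approve", "confidence": 0.55}
print(decide(routine, machine_predict, human_review))  # machine decides alone
print(decide(unusual, machine_predict, human_review))  # escalated to human
```

The design choice is where to set the threshold: a higher threshold escalates more cases, spending scarce human attention to buy reliability on the unusual ones.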
We handle many of our smaller decisions on autopilot, perhaps by accepting the default, choosing to focus all our attention on bigger decisions. However, deciding not to decide is still a decision. Decision making is at the core of most occupations. Decisions usually occur under conditions of uncertainty.
Prediction is not a decision. Making a decision requires applying judgment to a prediction and then acting. Humans usually perform prediction and judgment together.
Prediction machines will have their most immediate impact at the decision level. But decisions have six other key elements. When someone (or something) makes a decision, they take input data from the world, which enables a prediction. That prediction is possible because training has established the relationships between different types of data and identified which data is most closely associated with a situation. Combining the prediction with judgment about what matters, the decision maker can then choose an action. The action leads to an outcome (which has an associated reward or payoff). The outcome is the consequence of the decision, and it is needed to provide a complete picture. The outcome may also provide feedback that helps improve the next prediction.
Most clearly, for prediction itself, a prediction machine is generally a better substitute for human prediction. As machine prediction increasingly replaces the predictions that humans make, the value of human prediction will decline. But a key point is that, while prediction is a key component of any decision, it is not the only component. The other elements of a decision—judgment, data, and action—remain, for now, firmly in the realm of humans.
They are complements to prediction, meaning they increase in value as prediction becomes cheap. For example, we may be more willing to exert effort by applying judgment to decisions where we previously had decided not to decide (e.g., accepted the default) because prediction machines now offer better, faster, and cheaper predictions. In that case, the demand for human judgment will increase.
What decision should you make? This is where judgment comes in. Judgment is the process of determining the reward to a particular action in a particular environment. It is about working out the objective you’re actually pursuing. Judgment involves determining what we call the “reward function,” the relative rewards and penalties associated with taking particular actions that produce particular outcomes.
As prediction becomes better, faster, and cheaper, we’ll use more of it to make more decisions, so we’ll also need more human judgment and thus the value of human judgment will go up.
Having better prediction raises the value of judgment. Prediction machines don’t provide judgment. Only humans do, because only humans can express the relative rewards from taking different actions.
As AI takes over prediction, humans will do less of the combined prediction-judgment routine of decision making and focus more on the judgment role alone.
With better prediction come more opportunities to consider the rewards of various actions—in other words, more opportunities for judgment. And that means that better, faster, and cheaper prediction will give us more decisions to make.
The promise of AI is that it can make prediction much more precise, especially in situations with a mix of generic and personalized information.
But until prediction machines become perfect at prediction, companies will have to figure out the costs of errors, which requires judgment. Uncertainty increases the cost of judging the payoffs for a given decision.
Figuring out the relative payoffs for different actions in different situations takes time, effort, and experimentation.
However, the rise of prediction machines increases the returns to understanding the logic and motivation for payoff values.
A machine has fundamental limitations about how much it can learn to predict your preferences. Humans have, explicitly and implicitly, their own knowledge of why they are doing something, which gives them weights that are both idiosyncratic and subjective.
While a machine predicts what is likely to happen, humans will still decide what action to take based on their understanding of the objective.
As prediction machines provide better and cheaper predictions, we need to work out how to best use those predictions. Whether or not we can specify judgment in advance, someone needs to determine the judgment.
If there are a manageable number of action-situation combinations associated with a decision, then we can transfer the judgment from ourselves to the prediction machine (this is “reward function engineering”) so that the machine can make the decision itself once it generates the prediction. This enables automating the decision.
Often, however, there are too many action-situation combinations, such that it is too costly to code up in advance all the payoffs associated with each combination, especially the very rare ones. In these cases, it is more efficient for a human to apply judgment after the prediction machine predicts.
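When the combinations are manageable, the codified judgment can be as simple as a payoff table. The sketch below illustrates reward function engineering for a fraud-screening decision; all payoff values and the fraud-probability inputs are hypothetical.

```python
# Hedged sketch of "reward function engineering": human judgment is
# codified in advance as a payoff table, so the machine can choose an
# action directly from its prediction.

# payoffs[situation][action] -> reward, fixed in advance by humans
payoffs = {
    "fraudulent": {"block": 100, "allow": -500},
    "legitimate": {"block": -20, "allow": 10},
}

def choose_action(p_fraud):
    """Pick the action with the highest expected payoff, given the
    machine's predicted probability that a transaction is fraudulent."""
    def expected(action):
        return (p_fraud * payoffs["fraudulent"][action]
                + (1 - p_fraud) * payoffs["legitimate"][action])
    return max(["block", "allow"], key=expected)

print(choose_action(0.9))   # high fraud risk -> block
print(choose_action(0.01))  # low fraud risk -> allow
```

With only two situations and two actions, the judgment fits in a small table and the decision automates cleanly; with thousands of rare combinations, filling in such a table in advance becomes too costly, which is when a human applies judgment after the prediction instead.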
Many decisions are complex and predicated on judgment that is not easily codified. The machine may learn to predict human judgment by observing many examples. Human trainers help AIs become good enough so that humans gradually become unnecessary for many aspects of a task. This is particularly important when the AI is automating a process with very little tolerance for error. A human may supervise the AI and correct mistakes. Over time, the AI learns from its mistakes until human correction is unnecessary.
There are limits to the ability of machines to predict human judgment. The limits relate to lack of data. We know some things that the machines don’t (yet), and, more importantly, we are better at deciding what to do when there isn’t much data.
Humans have three types of data that machines don’t.
First, human senses are powerful. In many ways, human eyes, ears, nose, and skin still surpass machine capabilities.
Second, humans are the ultimate arbiters of our own preferences.
Third, privacy concerns restrict the data available to machines.
Humans can provide two main solutions to the problem of not having much data: experiments and modeling. If the situation arises often enough, you can run a randomized control trial.
Modeling, an alternative to experiments, involves having a deep understanding of the situation and the process that generated the data observed. It is particularly useful when experiments are impossible because the situation doesn’t arise often enough or the cost of an experiment is too high.
Enhanced prediction enables decision makers, whether human or machine, to handle more “ifs” and more “thens.”
Better prediction allows you to predict more things more often, reducing uncertainty.
Each new prediction also has an indirect effect: it makes choices feasible that you would not have considered before. And you don’t have to explicitly code the “ifs” and “thens.” You can train the prediction machine with examples. Problems that were not previously understood as prediction problems may now be tackled as such.
In the absence of good prediction, we do a lot of "satisficing": making decisions that are not optimal but "good enough" given the information available.
We are so used to satisficing in our businesses and in our social lives that it will take practice to imagine the vast array of transformations possible as a result of prediction machines that can handle more “ifs” and “thens” and, thus, more complex decisions in more complex environments.
Prediction machines will provide new and better methods for managing risk.
Often, the distinction between AI and automation is muddy. Automation arises when a machine undertakes an entire task, not just prediction.
AI, in its current incarnation, involves a machine performing one element: prediction. Each of the other elements represents a complement to prediction, something that becomes more valuable as prediction gets cheaper.
Whether full automation makes sense depends on the relative returns to machines also doing the other elements.
Humans and machines can accumulate data, whether for input, training, or feedback, depending on the data type. A human must ultimately make a judgment, but the human can codify judgment and program it into a machine in advance of a prediction. Or a machine can learn to predict human judgment through feedback. This brings us to the action.
We must determine the returns to machines performing the other elements (data collection, judgment, actions) to decide whether a task should be or will be fully automated.
If the final human element in a task is prediction, then once a prediction machine can do as well as a human, a decision maker can remove the human from the equation.
No Time or Need to Think
When you employ a prediction machine, the prediction made must be communicated to the decision maker. Automation can also arise when the costs of communication are high or there isn’t enough time to communicate the prediction to the human.
But if the prediction leads directly to an obvious course of action (“no need to think”), then the case for leaving human judgment in the loop is diminished. If a machine can be coded for judgment and handle the consequent action relatively easily, then it makes sense to leave the entire task in the machine’s hands.
When the Law Requires a Human to Act
When a human is best suited to take the action, such decisions will not be fully automated. At other times, prediction is the key constraint on automation. When the prediction gets good enough and judging the payoffs can be pre-specified—either a person does the hard coding or a machine learns by watching a person—then a decision will be automated.
The amount of "externalities" determines the extent of automation.
Like classical computing, AI is a general-purpose technology. It has the potential to affect every decision, because prediction is a key input to decision making.
The actual implementation of AI is through the development of tools. The unit of AI tool design is not “the job” or “the occupation” or “the strategy,” but rather “the task.”
Tasks are collections of decisions. Decisions are based on prediction and judgment and informed by data.
The decisions within a task often share these elements in common. Where they differ is in the action that follows.
Sometimes we can automate all the decisions within a task. Or, thanks to enhanced prediction, we can now automate the last remaining decision that had not yet been automated.
For many businesses, prediction machines will be impactful, but in an incremental and largely inconspicuous manner.
AI tools are point solutions. Each generates a specific prediction, and most are designed to perform a specific task. Many AI startups are predicated on building a single AI tool.
AI tools can change work flows in two ways. First, they can render tasks obsolete and therefore remove them from work flows. Second, they can add new tasks. This may be different for every business and every work flow.
Automating workflow tasks
Large corporations are composed of work flows that turn inputs into outputs.
Work flows are made up of tasks. In deciding how to implement AI, companies will break their work flows down into tasks, estimate the ROI for building or buying an AI to perform each task, rank-order the AIs in terms of ROI, and then start from the top of the list and begin working downward.
Sometimes a company can simply drop an AI tool into their work flow and realize an immediate benefit due to increasing the productivity of that task.
More often, deriving a real benefit from implementing an AI tool requires rethinking, or "reengineering," the entire work flow.
As a result, similar to the personal computer revolution, it will take time to see productivity gains from AI in many mainstream businesses.
The current generation of AI provides tools for prediction and little else.
But how should you decide whether you should use an AI tool for a particular task in your business?
Tasks need to be decomposed in order to see where prediction machines can be inserted. This allows you to estimate the benefit of the enhanced prediction and the cost of generating that prediction.
Once you have generated reasonable estimates, rank-order the AIs from highest to lowest ROI, then start at the top and work your way down, implementing AI tools as long as the expected ROI makes sense.
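The decomposition-and-ranking process can be sketched as a short program. The task names, benefit and cost figures, and hurdle rate below are hypothetical placeholders for the estimates a company would produce for its own work flow.

```python
# Hedged sketch: estimate ROI per task, rank highest to lowest, and
# adopt AI tools down the list while the expected ROI clears a hurdle.

tasks = [
    ("demand forecasting", {"benefit": 500_000, "cost": 200_000}),
    ("invoice matching",   {"benefit": 120_000, "cost": 100_000}),
    ("churn prediction",   {"benefit": 300_000, "cost": 100_000}),
]

HURDLE = 0.5  # hypothetical minimum acceptable ROI

def roi(t):
    return (t["benefit"] - t["cost"]) / t["cost"]

# Rank-order from highest to lowest ROI, then adopt down the list.
ranked = sorted(tasks, key=lambda kv: roi(kv[1]), reverse=True)
adopted = [name for name, t in ranked if roi(t) >= HURDLE]
print(adopted)
```

In this toy example, churn prediction (ROI 2.0) and demand forecasting (ROI 1.5) clear the hurdle while invoice matching (ROI 0.2) does not, so the company would implement the first two and defer the third.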
The AI canvas is an aid to help with the decomposition process.
At the center of the AI canvas is prediction. You need to identify the core prediction at the heart of the task, and this can require AI insight.
The effort to answer this question often initiates an existential discussion among the leadership team: “What is our real objective, anyhow?” Prediction requires a specificity not often found in mission statements.
Companies often find themselves having to go back to basics to realign on their objectives and sharpen their mission statement as a first step in their work on their AI strategy.
Using the AI Canvas
- Fill out the AI canvas for every decision or task.
- Identify all three data types required: training, input, and feedback.
- Articulate precisely what you need to predict, the judgment required to assess the relative value of different actions and outcomes, the action possibilities, and the outcome possibilities.
ACTION: What are you trying to do?
PREDICTION: What do you need to know to make the decision?
JUDGMENT: How do you value different outcomes and errors?
OUTCOME: What are your metrics for task success?
INPUT: What data do you need to run the predictive algorithm?
TRAINING: What data do you need to train the predictive algorithm? Atomwise employs data on the binding affinity of molecules and proteins, along with molecule and protein characteristics.
FEEDBACK: How can you use the outcomes to improve the algorithm? Atomwise uses test outcomes, regardless of their success, to improve future predictions.
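The canvas itself can be represented as a simple data structure. The sketch below fills it in for a hypothetical drug-discovery task in the spirit of the Atomwise example; the field contents are illustrative assumptions, not the company's actual canvas.

```python
# Hedged sketch: the AI canvas as a data structure, one field per element.
from dataclasses import dataclass

@dataclass
class AICanvas:
    action: str      # what are you trying to do?
    prediction: str  # what do you need to know to make the decision?
    judgment: str    # how do you value different outcomes and errors?
    outcome: str     # metrics for task success
    input: str       # data needed to run the predictive algorithm
    training: str    # data needed to train the predictive algorithm
    feedback: str    # how outcomes improve the algorithm

canvas = AICanvas(
    action="choose candidate molecules to synthesize and test",
    prediction="binding affinity of each molecule to the target protein",
    judgment="cost of testing a dud vs. value of finding a strong binder",
    outcome="hit rate among synthesized candidates",
    input="molecule and protein characteristics",
    training="historical binding-affinity measurements",
    feedback="lab test outcomes, successful or not",
)
print(canvas.prediction)
```

Forcing every element into its own field is the point of the exercise: a team that cannot fill in the prediction or judgment fields precisely has not yet decomposed the task far enough.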
A job is a collection of tasks. When breaking down a work flow and employing AI tools, some tasks previously performed by humans may be automated, the ordering and emphasis of remaining tasks may change, and new tasks may be created. Thus, the collection of tasks that make up a job can change.
The automation of tasks forces us to think more carefully about what really constitutes a job, what people are really doing.
In some cases, the goal is to fully automate every task associated with a job. AI tools alone are unlikely to be the catalyst for this, because work flows amenable to full automation involve a series of tasks that cannot be (easily) avoided, even tasks that initially seem both low skilled and unimportant.
One failed piece can derail the entire exercise. You need to consider every step. Those small tasks may be very difficult missing links in automation and fundamentally constrain how to reformulate jobs. Thus, AI tools that address these missing links can have substantive effects.
The implementation of AI tools generates four implications for jobs:
- AI tools may augment jobs, as in the example of spreadsheets and bookkeepers. A job is augmented when machines take over some, but not all, of its tasks; this is likely to become quite common as a natural consequence of implementing AI tools. The tasks that make up a job will change.
- AI tools may contract jobs, as in fulfillment centers.
- AI tools may lead to the reconstitution of jobs, with some tasks added and others taken away, as with radiologists.
- AI tools may shift the emphasis on the specific skills required for a particular job, as with school bus drivers.
AI tools may shift the relative returns to certain skills and, thus, change the types of people who are best suited to particular jobs.
C-suite leadership must not fully delegate AI strategy to their IT department because powerful AI tools may go beyond enhancing the productivity of tasks performed in the service of executing against the organization’s strategy and instead lead to changing the strategy itself.
AI can lead to strategic change if three factors are present:
(1) there is a core trade-off in the business model (e.g., shop-then-ship versus ship-then-shop);
(2) the trade-off is influenced by uncertainty (e.g., higher sales from ship-then-shop are outweighed by higher costs from returned items due to uncertainty about what customers will buy);
(3) an AI tool that reduces uncertainty tips the scales of the trade-off so that the optimal strategy changes from one side of the trade to the other (e.g., an AI that reduces uncertainty by predicting what a customer will buy tips the scale such that the returns from a ship-then-shop model outweigh those from the traditional model).
Another reason C-suite leadership is required for AI strategy is that the implementation of AI tools in one part of the business may also affect other parts.
Prediction machines will increase the value of complements, including judgment, actions, and data.
The increasing value of judgment may lead to changes in organizational hierarchy—there may be higher returns to putting different roles or different people in positions of power.
In addition, prediction machines enable managers to move beyond optimizing individual components to optimizing higher-level goals and thus make decisions closer to the objectives of the organization.
Owning the actions affected by prediction can be a source of competitive advantage that allows traditional businesses to capture some of the value from AI.
However, in some cases, where powerful AI tools provide a significant competitive advantage, new entrants may vertically integrate into owning the action and leverage their AI as a basis for competition.
Uncertainty has an impact on a business’s boundaries. Lower cost versus more control is a core trade-off. That trade-off is mediated by uncertainty; specifically, the returns to control increase with the level of uncertainty.
The trade-off between short- and long-term performance and routine versus non-routine events is resolved by a key organizational choice: how much to rely on external suppliers. But the salience of that choice is closely related to uncertainty. Because prediction machines reduce uncertainty, they can influence the boundary between your organization and others.
By reducing uncertainty, prediction machines increase the ability to write contracts, and thus increase the incentive for companies to contract out both capital equipment and labor that focuses on data, prediction, and action. However, prediction machines decrease the incentive for companies to contract out labor that focuses on judgment.
Judgment quality is hard to specify in a contract and difficult to monitor. If judgment could be well specified, then it could be programmed and we wouldn’t need humans to provide it. Since judgment is likely to be the key role for human labor as AI diffuses, in-house employment will rise and contracting out labor will fall.
AI will increase incentives to own data. Still, contracting out for data may be necessary when the predictions that the data provides are not strategically essential to your organization. In such cases, it may be best to purchase predictions directly rather than purchase data and then generate your own predictions.
AI might enable machines to operate in more complex environments. It expands the number of reliable “ifs,” thus lessening a business’s need to own its own capital equipment, for two reasons.
First, more “ifs” means that a business can write contracts to specify what to do if something unusual happens.
Second, AI-driven prediction gives us many more “ifs” that we can use to clearly specify the “thens.”
It is not clear what the impact on outsourcing would be since better prediction drives more outsourcing, while more complexity tends to reduce it.
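The “ifs” and “thens” above can be pictured as a contract table: each predicted state maps to a pre-agreed action, and anything the machine cannot predict falls back to human judgment. The states and actions below are hypothetical, purely to illustrate the mechanism.

```python
# Toy illustration: richer prediction expands the "ifs" a contract
# can cover, each mapped to a pre-specified "then". All states and
# actions here are hypothetical.
contract = {
    "demand_surge": "supplier ships 20% extra stock",
    "demand_normal": "supplier ships standard order",
    "port_delay": "supplier reroutes via air freight",
}

def contracted_action(predicted_state):
    # States the prediction machine cannot anticipate fall outside
    # the contract and require judgment (renegotiation).
    return contract.get(predicted_state, "renegotiate")
```

The more states the prediction machine can reliably distinguish, the more rows this table can have, and the less often the parties fall back to the costly default.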
When performance measures change from objective to subjective, human resource (HR) management becomes more complex. Job responsibilities have to become less explicit and more relational.
You will evaluate and reward employees based on subjective processes, such as performance reviews that take into account the complexity of the tasks and the employees’ strengths and weaknesses. Such processes are tough to implement because reliance on them to create incentives for good performance requires a great deal of trust.
The direct implication of this line of economic logic is that AI will shift HR management toward the relational and away from the transactional.
The reason is twofold. First, human judgment, where it is valuable, is utilized because it is difficult to program such judgment into a machine. The rewards are either unstable or unknown, or require human experience to implement.
Second, to the extent that human judgment becomes more important when machine predictions proliferate, such judgment necessarily involves subjective means of performance evaluation. If objective means are available, chances are that a machine could make such judgment without the need for any HR management. Thus, humans are critical to decision making where the goals are subjective. For that reason, the management of such people will likely be more relational.
The importance of judgment means that employee contracts need to be more subjective.
The forces affecting capital equipment also affect labor. If the key outputs of human labor are data, predictions, or actions, then using AI means more outsourced contract labor, just as it means more outsourced equipment and supplies. As with capital, better prediction gives more “ifs” that we can use to clearly specify the “thens” in an outsourcing contract.
However, the more important effect on labor will be the increasing importance of human judgment. Prediction and judgment are complements, so better prediction increases the demand for judgment, meaning that your employees’ main role will be to exercise judgment in decision making.
This, by definition, cannot be well specified in a contract. Here, the prediction machine increases uncertainty in the strategic dilemma because evaluating the quality of judgment is difficult, so contracting out is risky.
Counterintuitively, better prediction increases the uncertainty you have over the quality of human work performed: you need to keep your reward function engineers and other judgment-focused workers in house.
Another critical strategic issue is the ownership and control of data.
Just as the consequences for workers relate to the complementarity between prediction and judgment, the relationship between prediction and data also drives these trade-offs. Data makes prediction better.
For AI startups, owning the data that allows them to learn is particularly crucial. Otherwise, they will be unable to improve their product over time.
Companies buy data because they can’t collect it themselves.
Unique data is important for creating strategic advantage. If data is not unique, it is hard to build a business around prediction machines. Without data, there is no real pathway to learning, so AI is not core to your strategy.
Better prediction may help an organization even if the data and predictions are not likely to be sources of strategic advantage. Even when both the data and the prediction lie outside the organization’s boundaries, the organization can still benefit from using the prediction.
The main implication here is that data and prediction machines are complements. Thus, procuring or developing an AI will be of limited value unless you have the data to feed it. If that data resides with others, you need a strategy to get it.
If the data resides with an exclusive or monopoly provider, then you may find yourself at risk of having that provider appropriate the entire value of your AI. If the data resides with competitors, there may be no strategy that would make it worthwhile to procure it from them. If the data resides with consumers, it can be exchanged in return for a better product or higher-quality service.
However, in some situations, you and others might have data that can be of mutual value; hence, a data swap may be possible. In other situations, the data may reside with multiple providers, in which case, you might need some more complicated arrangement of purchasing a combination of data and prediction.
Whether you collect your own data and make predictions or buy them from others depends on the importance of prediction machines to your company. If the prediction machine is an input that you can take off the shelf, then you can treat it like most companies treat energy and purchase it from the market, as long as AI is not core to your strategy. In contrast, if prediction machines are to be the center of your company’s strategy, then you need to control the data to improve the machine, so both the data and the prediction machine must be in house.
AI-first means devoting resources to data collection and learning (a longer-term objective) at the expense of important short-term considerations such as immediate customer experience, revenue, and user numbers.
Adopting an AI-first strategy is a commitment to prioritize prediction quality and to support the machine learning process, even at the cost of short-term factors such as consumer satisfaction and operational performance.
Gathering data might mean deploying AIs whose prediction quality is not yet at optimal levels. The central strategic dilemma is whether to prioritize that learning or instead shield customers from the performance sacrifices it entails.
This is the classic “innovator’s dilemma,” whereby established firms do not want to disrupt their existing customer relationships, even if doing so would be better in the long run.
The innovator’s dilemma occurs because, when they first appear, innovations might not be good enough to serve the customers of the established companies in an industry, but they may be good enough to provide a new startup with enough customers in some niche area to build a product. Over time, the startup gains experience. Eventually, the startup has learned enough to create a strong product that takes away its larger rival’s customers. By that point, the larger company is too far behind, and the startup eventually dominates.
AI requires learning, and startups may be more willing to invest in this learning than their more established rivals.
The innovator’s dilemma is less of a dilemma when the company in question faces tough competition, especially if that competition comes from new entrants that do not face constraints associated with having to satisfy an existing customer base. In that situation, the threat of the competition means that the cost of doing nothing is too high. Such competition tips the equation toward adopting the disruptive technology quickly even if you are an established company. Put differently, for technologies like AI where the long-term potential impact is likely to be enormous, the whiff of disruption may drive early adoption, even by incumbents.
Learning can take a great deal of data and time before a machine’s predictions become reliably accurate. It will be a rare instance indeed when a prediction machine just works off the shelf. Someone selling you an AI-powered piece of software may have already done the hard work of training. But when you want to manage AI for a purpose core to your own business, no off-the-shelf solution is likely. You won’t need a user manual so much as a training manual. This training requires some way for the AI to gather data and improve.
Learning-by-using is a term that economic historian Nathan Rosenberg coined to describe the phenomenon whereby firms improve their product design through interactions with users.
Supervised learning - you use this technique when you already have good data on what you are trying to predict;
Reinforcement learning - you use this technique when you do not have good data on what you are trying to predict, but you can tell, after the fact, how right you were.
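The contrast between the two techniques can be sketched in a few lines. The first half fits a model to labeled examples (supervised learning); the second learns from after-the-fact reward feedback alone, as a two-armed bandit (a minimal form of reinforcement learning). All data, actions, and parameters here are invented for illustration.

```python
import random

# Supervised learning: we already have labeled examples (x, y),
# so we fit a one-parameter model y ~ w * x by least squares.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 7.8)]  # hypothetical labels
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Reinforcement learning: no labels, only after-the-fact feedback.
# The agent tries actions, observes rewards, and shifts toward the
# action with the higher estimated payoff (a two-armed bandit).
def reward(action):
    # Hypothetical environment: action "b" pays more on average.
    return random.gauss(1.0 if action == "a" else 2.0, 0.1)

random.seed(0)
estimates = {"a": 0.0, "b": 0.0}
counts = {"a": 0, "b": 0}
for _ in range(200):
    # Explore 10% of the time; otherwise exploit the best estimate.
    if random.random() < 0.1:
        action = random.choice(["a", "b"])
    else:
        action = max(estimates, key=estimates.get)
    counts[action] += 1
    # Incremental average of observed rewards for this action.
    estimates[action] += (reward(action) - estimates[action]) / counts[action]

best = max(estimates, key=estimates.get)  # the learned best action
```

The supervised model needs the labels up front; the bandit discovers the better action only by trying both and grading itself after the fact.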
Those familiar with software development know that code needs extensive testing to locate bugs. In some situations, companies release the software to users to help find the bugs that might emerge in ordinary use. Whether by “dog fooding” (forcing early versions of software to be used internally) or “beta testing” (inviting early adopters to test the software), these forms of learning-by-using involve a short-term investment in learning to enable the product to improve over time.
This short-term cost of training for a longer-term benefit is similar to the way humans learn to do their jobs better.
Companies design systems to train new employees until they are good enough and then deploy them into service, knowing they will improve as they learn from experience doing their job. But determining what constitutes good enough is a critical decision. In the case of prediction machines, it can be a major strategic decision regarding timing: when to shift from in-house training to on-the-job learning.
There are no ready answers for what constitutes good enough for prediction machines, only trade-offs. Success with prediction machines will require taking these trade-offs seriously and approaching them strategically.
First, what tolerance do people have for error? We have high tolerance for error with some prediction machines and low tolerance for others.
Second, how important is it to capture user data in the real world, given that training in house might otherwise take a prohibitively long time?
Machines learn faster with more data, and when machines are deployed in the wild, they generate more data. However, bad things can happen in the real world and damage the company brand. Putting products in the wild earlier accelerates learning but risks harming the brand (and perhaps the customer); putting them out later slows learning but allows for more time to improve the product in house and protect the brand (and, again, perhaps the customer).
One intermediate step that softens this trade-off is to use simulated environments: the AI trains in a simulation before being released into the real world.
One form of this approach is called adversarial machine learning, which pits the main AI and its objective against another AI that tries to foil that objective.
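A stripped-down version of that adversarial loop: a “main” model classifies inputs, an “adversary” searches for inputs the model gets wrong, and each discovered failure is used to retrain the model. The task, thresholds, and update rule below are invented toys, not any production technique.

```python
# Toy adversarial training loop. Ground truth: inputs >= 5.0 are
# "positive". The main model is a learned threshold, deliberately
# started off-target at 8.0.
def truth(x):
    return x >= 5.0

threshold = 8.0

def model(x):
    return x >= threshold

def adversary():
    # The adversary scans candidate inputs for one the model
    # misclassifies, i.e., an input that foils the objective.
    for i in range(100):
        x = i / 10.0
        if model(x) != truth(x):
            return x
    return None

# Each failure the adversary finds nudges the model toward truth.
for _ in range(50):
    x = adversary()
    if x is None:
        break  # the adversary can no longer fool the model
    threshold += 0.5 if model(x) and not truth(x) else -0.5
```

When the loop ends, the learned threshold has converged to the true boundary at 5.0 without any labeled training set: the adversary’s failures were the training signal.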
Such simulated learning approaches cannot take place on the ground; they require something akin to a laboratory approach that produces a new machine learning algorithm that is then copied and pushed out to users. The advantage is that the machine is not trained in the wild, so the risk to the user experience, or even to the users themselves, is mitigated.
The disadvantage is that simulations may not provide sufficiently rich feedback, reducing, but not eliminating, the need to release the AI early. Eventually, you have to let the AI loose in the real world.
Learning in the wild improves the AI. The company can then use real-world outcomes that the prediction machine experiences to improve the predictions for next time. Often, a company collects data in the real world, which refines the machine before it releases an updated prediction model.
Learning in the cloud has the advantage of shielding users from undertrained versions. The downside, however, is that the common AI that resides on devices cannot take into account rapidly changing local conditions or, at the very least, can only do so when that data is built into a new generation. Thus, from the perspective of a user, improvements come in jumps.
By contrast, imagine if the AI could learn on the device and improve in that environment. It could then respond more readily to local conditions and optimize itself for different environments. In environments where things change rapidly, it is beneficial to improve the prediction machines on the devices themselves.
Companies must trade off how quickly they should use a prediction machine’s experience in the real world to generate new predictions. Use that experience immediately and the AI adapts more quickly to changes in local conditions, but at the cost of quality assurance.
Learning often requires customers who are willing to provide data.
The relative payoffs associated with trading people’s privacy concerns for predictive accuracy will guide the ultimate strategic choice.
Enhanced privacy might give companies permission to learn about consumers but may also mean the learning is not particularly useful.
There are no easy ways to overcome the trade-off that arises when prediction alters crowd behavior, thereby denying the AI the very information it needs to form the correct prediction. In this instance, the needs of the many outweigh the needs of the few or the one. But this is certainly not a comfortable way of thinking about managing customer relationships.
Sometimes, to improve products, especially when they involve learning-by-using, it is important to jolt the system so that consumers actually experience something new that the machine can learn from.
Customers who are forced into that new environment often have a worse experience, but everyone else benefits from those experiences.
For beta testing, the trade-off is voluntary, as customers opt into the early versions. But beta testing may attract customers who do not use the product the same way as your general customers would.
To gain experience about all your customers, you may sometimes need to degrade the product for those customers in order to get feedback that will benefit everyone.
The scarcity of experience becomes even more salient when you consider the experience of your human resources. If the machines get the experience, then the humans might not. Recently, some have expressed concern that automation could result in the deskilling of humans.
There are plenty of situations in which automation creates no such paradox.
The solutions involve ensuring that humans gain and retain skills, reducing the amount of automation to provide time for human learning. In effect, experience is a scarce resource, some of which you need to allocate to humans to avoid deskilling.
The reverse logic is also true. To train prediction machines, having them learn through the experience of potentially catastrophic events is surely valuable. But if you put a human in the loop, how will that machine’s experience emerge? And so another trade-off in generating a pathway to learning is between human and machine experience.
The emergence of racial profiling is a societal issue, but also a potential problem for companies. They may run afoul of employment antidiscrimination rules.
Discrimination might emerge in even subtler ways. While many tend to think of discrimination as arising from disparate treatment—setting different standards for men and women—the ad-placement differences might result in what lawyers call “disparate impact.” A gender-neutral procedure turns out to affect some employees who might have reason to fear discrimination (a “protected class” to lawyers) differently from others.
A person or an organization can be liable for discrimination, even if it is accidental. A challenge with AI is that such unintentional discrimination can happen without anyone in the organization noticing.
Predictions generated by deep learning and many other AI technologies appear to be created from a black box. It isn’t feasible to look at the algorithm or formula underlying the prediction and identify what causes what. To figure out if AI is discriminating, you have to look at the output.
To prevent liability issues (and to avoid being discriminatory), if you discover unintentional discrimination in the output of your AI, you need to fix it. You need to figure out why your AI generated discriminatory predictions.
The point is that the black box of AI is not an excuse to ignore potential discrimination or a way to avoid using AI in situations where discrimination might matter. Plenty of evidence shows that humans discriminate even more than machines. Deploying AI requires additional investments in auditing for discrimination, then working to reduce any discrimination that results.
Algorithmic discrimination can easily emerge at the operational level but can end up having strategic and broader consequences. Strategy involves directing those in your organization to weigh factors that might not otherwise be obvious. This becomes particularly salient with systematic risks, like algorithmic discrimination, that may have a negative impact on your business.
AI that relies on correlation rather than causal experimentation can easily fall into the same traps as anyone who naively uses data and simple statistics.
Unknown knowns are a key weakness of prediction machines that require human judgment to overcome. At the moment, only thoughtful humans can work out if the AI is falling into that trap.
While software has always been subject to security risks, with AI those risks emerge through the possibility of data manipulation. Three classes of data have an impact on prediction machines: input, training, and feedback. All three have potential security risks.
Prediction machines feed on input data. They combine this data with a model to generate a prediction. So, just like the old computer adage—“garbage in, garbage out”—prediction machines fail if they have poor data or a bad model. A hacker might cause a prediction machine to fail by feeding it garbage data or manipulating the prediction model. One type of failure is a crash. Crashes might seem bad, but at least you know when they have occurred. When someone manipulates a prediction machine, you may not know about it (at least not until too late).
Machines are generating predictions used for decision making. Companies deploy them in situations where they really matter: that is, where we expect them to have a real impact on decisions. Without such decision embeddedness, why go to the trouble of making a prediction in the first place? Sophisticated bad actors in this context would understand that by altering a prediction, they could adjust the decisions. For instance, a diabetic using an AI to optimize insulin intake could end up in serious jeopardy if the AI has incorrect data about that person and then offers predictions that suggest lowering insulin intake when it should be increased. If harming a person is someone’s objective, then this is one way to do it effectively.
We are most likely to deploy prediction machines in situations where prediction is hard. A bad actor might not find precisely what data is needed to manipulate a prediction. A machine may form a prediction based on a confluence of factors. A single lie in a web of truth is of little consequence. In many other situations, identifying some data that can be used to manipulate a prediction is straightforward. Examples might be location, date, and time of day. But identity is the most important. If a prediction is specific to a person, feeding the AI the wrong identity leads to bad consequences.
AI technologies will develop hand-in-hand with identity verification.
While personalized predictions might be vulnerable to the manipulation of the individual, impersonal predictions may face their own set of risks related to population-level manipulation.
A seemingly easy solution to the problem of system-wide failure is to encourage diversity in the prediction machines you deploy. This will reduce the security risks, but at the cost of reduced performance. It might also increase the risk of incidental smaller failures due to a lack of standardization.
Many of the scenarios for system-wide failure involve an attack on several prediction machines at the same time. A way to secure against a massive simultaneous attack, even in the presence of standard homogenous prediction machines, is to untether the device from the cloud.
Another risk is that someone can interrogate your prediction machines. Your competitors may be able to reverse-engineer your algorithms, or at least have their own prediction machines use the output of your algorithms as training data.
The strategic issue is that if a competitor can observe the data entering your AI and the output it reports, it has the raw materials to employ its own AI, engage in supervised learning, and reconstruct your algorithm.
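This kind of model extraction can be sketched concretely: a rival logs observed (input, output) pairs from a “victim” prediction machine and fits a surrogate to them by ordinary least squares. The victim’s linear form and coefficients here are hypothetical; the point is that the rival recovers them without ever seeing the model itself.

```python
# Toy model extraction: the victim's internal model is secret, but
# its inputs and outputs are observable.
def victim_predict(x):
    return 3.0 * x + 1.0  # hypothetical secret internal model

# The rival logs observed (input, output) pairs ...
observed = [(x, victim_predict(x)) for x in range(10)]

# ... and fits its own model by ordinary least squares.
n = len(observed)
mean_x = sum(x for x, _ in observed) / n
mean_y = sum(y for _, y in observed) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in observed)
         / sum((x - mean_x) ** 2 for x, _ in observed))
intercept = mean_y - slope * mean_x

# The surrogate now reproduces the victim's predictions.
def surrogate_predict(x):
    return slope * x + intercept
```

Ten observed queries were enough to clone this (deliberately simple) model; real prediction machines need far more queries, which is what makes query volume a detectable signal.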
But more worrisome is that the expropriation of this knowledge may lead to situations where it is easier for bad actors to manipulate the prediction and the learning process. Once an attacker understands the machine, the machine becomes more vulnerable.
On the positive side, such attacks leave a trail. It is necessary to query the prediction machine many times to understand it. Unusual quantities of queries or an unusual diversity of queries should raise red flags. Once raised, protecting the prediction machine becomes easier, although not easy. But at least you know that an attack is coming and what the attacker knows. Then you can protect the machine by either blocking the attacker or (if that is not possible) preparing a backup plan if something goes wrong.
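Those red flags can be raised with a very simple monitor: count each client’s query volume and query diversity and alert when either exceeds a threshold. The class name, thresholds, and client identifiers below are all illustrative, not a reference to any real system.

```python
from collections import defaultdict

# Hypothetical red-flag thresholds; real values would be tuned to
# the service's normal traffic.
MAX_QUERIES = 100
MAX_DISTINCT = 50

class QueryMonitor:
    """Flags clients with unusual query volume or diversity."""

    def __init__(self):
        self.queries = defaultdict(list)

    def record(self, client_id, query):
        self.queries[client_id].append(query)

    def flagged(self):
        alerts = []
        for client, qs in self.queries.items():
            if len(qs) > MAX_QUERIES or len(set(qs)) > MAX_DISTINCT:
                alerts.append(client)
        return alerts

monitor = QueryMonitor()
for q in range(5):  # an ordinary user making a handful of queries
    monitor.record("normal-user", f"query-{q}")
for q in range(200):  # an attacker probing the model broadly
    monitor.record("probe-bot", f"query-{q}")
```

The ordinary user stays below both thresholds; the probing client trips both, giving the defender early warning before the extraction is complete.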
Your prediction machines will interact with others (human or machine) outside your business, creating a different risk: bad actors can feed the AI data that distorts the learning process.
This goes beyond manipulating a single prediction: it involves teaching the machine to predict incorrectly in a systematic way.
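A minimal sketch of such data poisoning, under invented numbers: a model learns a spam-score threshold from user feedback, and an attacker floods the feedback channel with extreme values, shifting every future prediction rather than any single one.

```python
# Toy data-poisoning sketch. The learned rule flags anything above
# the mean reported spam score; feedback values are hypothetical.
def learn_threshold(scores):
    return sum(scores) / len(scores)

honest_feedback = [2.0, 3.0, 2.5, 3.5, 2.8]
clean = learn_threshold(honest_feedback)

# The attacker injects fake extreme feedback, teaching the machine
# to predict incorrectly in a systematic way: the threshold rises
# and real spam slips under it.
poisoned_feedback = honest_feedback + [10.0] * 20
poisoned = learn_threshold(poisoned_feedback)
```

The harm is persistent: unlike a manipulated single prediction, the corrupted threshold misclassifies every subsequent input until the feedback data is audited and cleaned.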