Chapter 6
Double-Edged Swords
Harness Platform Power, but Don’t Abuse It
Quotes from Mark Zuckerberg, cofounder and CEO of Facebook:
2009: “Move fast and break things.”
2018: “We did not take a broad enough view of our responsibility, and that was a big mistake.”

Successful platforms inevitably acquire power. Platform power, like corporate power more generally, can take on different forms—economic but also social and political. How platform companies exercise that power reflects their position toward another crucial aspect of the business: platform governance.
In the early stages, it may be desirable for start-ups to “move fast and break things,” as Facebook’s Mark Zuckerberg declared in 2009. However, once they have achieved strong market positions, firms that abuse their economic power or fail in other governance areas can end up losing big. This holds true for platforms as well as other businesses, and reflects the other side of the coin that Zuckerberg acknowledged with his 2018 comment. In particular, powerful platforms may disenfranchise their ecosystem members, provoking resentment and fear among business partners (“complementors”) and customers alike. They may drive beleaguered rivals into concerted action. They may also antagonize government regulators, who have demonstrated they will strike at platforms that do not keep their power or ambitions in check. To sustain the business long term, therefore, we argue that platform leaders must become aware of their potential power and understand how to use it. The challenge is to compete aggressively while staying within the bounds of what most societies consider legal as well as fair and ethical.
The Mood Change
Until recently, the dominant mood in the business press (and in many business books on platform companies) was unbridled enthusiasm for the efficiency of platforms and awe at the speed at which they introduced both innovation and disruption. We and other authors have shown that many platforms are indeed amazing: They can reduce search and transaction costs and fundamentally restructure entire industries within a few short years. We have seen this dynamic in computers, online marketplaces, taxis, hotels, financial services, and many other fields. Nonetheless, the tide of public perception seems to have turned: Media coverage of platforms has become increasingly negative. Calls to break up Alphabet-Google have appeared in major newspapers. The “Delete Facebook” movement has gained traction among the public. Uber nearly collapsed from internal chaos, failure to properly vet drivers, misuse of digital technology (e.g., “Greyball” software that helped drivers evade law enforcement in markets where Uber was prohibited), and opposition from local governments and taxi industry representatives.
Why now? After three decades of explosive growth around the world, why have competitors, users, and regulators started to raise serious questions about the use and abuse of platform power? Part of the answer is scale: The biggest platforms—Apple, Amazon, Google, Microsoft, and Facebook—have become so large and valuable that they appear to be more influential and wealthier than many governments. As a group, these top platform firms have garnered so much power that one New York Times columnist labeled them “the Frightful Five.” These tech giants may have become too big to control. Google and Facebook dominate two-thirds of digital advertising. Apple has captured 90 percent of the world’s profits in smartphones. Amazon presides over more than 40 percent of e-commerce in the United States. Microsoft still owns 90 percent of the world’s PC operating systems. Intel still provides some 80 percent of the microprocessors for personal computers and more than 90 percent of the microprocessors for Internet servers. Facebook accounts for perhaps two-thirds of social media activity. The most powerful platform companies have started to look a bit like the big banks in the 2008–2009 financial crisis: too big to fail? Consider as well how platforms have recently enabled the dissemination of fake news, Russian manipulation of social media, and electoral tampering, and it becomes clear that we have reached an inflection point. We now must view the most powerful platform companies as double-edged swords, capable of both good and evil.
Some threats reflect classical economic concerns, such as abuse of market power. In the non-digital world, governments traditionally addressed these problems via antitrust laws. Indeed, virtually every large platform company has faced American, European, or Chinese antitrust actions. Microsoft and Intel were targeted multiple times between 1994 and 2005; Google (and its holding-company parent since 2015, Alphabet) has been the big target in the last ten years, including high-profile cases brought by the European Commission against Google’s practices with vertical search and the Android mobile operating system. Apple was found guilty of price fixing and conspiring with electronic book publishers to raise consumer prices. More recently, a Yale law student published a widely quoted treatise on why antitrust law must change to address the threats from Amazon and other platform businesses even if their shares in particular markets remain well below the usual thresholds for antitrust action.

Still, antitrust concerns and signs of market power abuse are not the full story. The 2018 scandal involving Facebook and Cambridge Analytica, for example, revealed that 87 million Facebook users had their personal data accessed without their explicit consent. Cambridge Analytica exploited weak Facebook privacy controls and turned a list of 300,000 people who had voluntarily answered a personality quiz on Facebook into a weapon for manipulating voter perception on a national scale.

Ultimately, the Cambridge Analytica debacle raised broader questions about who is responsible and liable for activities on a platform. Are the participants on the different “market sides” responsible for their specific actions? Or must the owner of the platform take responsibility? For example, is Alibaba responsible for counterfeit products sold on its consumer platform, Taobao? Is YouTube responsible for pirated content uploaded to its platform? Who is responsible for violent or extreme user-generated content posted to any number of platforms around the world?
At one end of the spectrum, some people argue that platforms are not at fault. They view platforms as passive conduits; their only role is to serve as intermediaries to facilitate innovations or transactions, including information exchanges and content creation. The line of reasoning goes like this: Should a telephone company be held responsible for illicit conversations that happen on its telephone lines? Probably not. But then we get into a grayer area: Should a railroad company be held liable for thefts or terrorist attacks on its rail network? Maybe not, but maybe—if the company did not make what society deems to be reasonable efforts to protect its users.
Some companies like to use the “technically impossible defense.” They argue that it is impossible to monitor all the activities that might occur on their platforms. And clearly, with billions of users interacting every day, monitoring and controlling all platform activity is probably impossible, although it is becoming more feasible with advances in technology. Nonetheless, after a number of events involving Facebook and other platforms, broad swaths of society, as well as company executives and boards of directors, are now convinced that platforms do have a grave responsibility to police “bad actors” and illicit activities.
Not all the controversies over platforms have involved large and powerful American firms. In China, for example, there was a boom in small-scale lending platforms that arranged short-term loans between individuals and start-up companies or small businesses with weak credit ratings. These platforms claimed to be matchmakers and largely worked outside China’s financial system, promising huge returns of as much as 50 percent to an estimated 4 million small investors. State-owned banks facilitated the money transfers and made the investments seem safer than they actually were. The government started to crack down in 2016–2017, jailing for life the founder of Ezubao, a $9 billion lending platform. In July 2018 alone, China closed down 168 of these dubious lending platforms, according to one industry source. In retrospect, most of these platforms were little more than Ponzi schemes, operating under the cover of the “sharing economy.”

The discussion of what governments should and should not permit is really about platform regulation. Some platforms have hidden behind their self-definition as “technology companies” to claim that sectoral regulation should not apply to them. Uber claimed it was not a transportation service and should not be regulated like a taxi company. Facebook claimed it was not a media company and that media regulation was irrelevant to its operations. Airbnb claimed it was not a hospitality or hotel company and merely connected renters and owners. The Chinese lending platforms presented themselves as matchmakers for peer-to-peer lenders and not as banks. These contested categorizations may seem like an arcane topic mainly of interest to regulators and corporate lawyers. However, they have had real financial and logistical implications for platform companies, individual users, and investors.
In this chapter, we argue that managers and entrepreneurs should try to harness platform power but not abuse it. Platforms will try to exploit a dominant position, but they need to avoid or minimize the challenges posed by antitrust and broader societal concerns. We have organized our discussion around four guidelines: Don’t be a bully, balance openness with trust, respect labor laws, and curate (self-regulate).
Don’t Be a Bully: Anticipate Antitrust and Competition Concerns
Given the many ways platforms can abuse their power, perhaps the most expensive consequences come with violations of antitrust laws. In a winner-take-all-or-most world where network effects drive industry concentration around a small number of dominant players, platform companies have many opportunities to exercise market power, harm consumer welfare, hurt local or global competitors, and extract monopoly or quasi-monopoly rents. Antitrust cases are costly and lengthy affairs that usually take many years to resolve. At a minimum, they represent a serious distraction for senior management. When governments determine that a firm has violated antitrust rules, the remedies can be painful. They range from huge fines (up to 10 percent of global revenues in the European Union) and behavioral restrictions (limitations on specific actions that may be core to a firm’s competitive advantage) to structural solutions (such as breaking up a firm). Recall that the U.S. Justice Department in 2000 recommended breaking up Microsoft, but this remedy was overturned on appeal.
LESSONS FROM THE MICROSOFT ANTITRUST CASE
Microsoft’s numerous antitrust cases in the U.S. and in Europe have been discussed extensively elsewhere, but let’s review the key facts here to highlight the biggest risks and business lessons for current platform companies. The Microsoft antitrust saga began with a U.S. Federal Trade Commission investigation in 1990, followed by a consent decree signed in 1994 and a 1998 lawsuit from the U.S. Department of Justice and twenty U.S. states. One of the central issues hinged on whether Microsoft had used its monopoly in operating systems with Windows to force computer makers to exclude a browser made by Netscape. In 2000, a federal judge ruled that Microsoft had violated antitrust law, and ordered that the company be broken up, separating out the operating system business from the applications and Internet businesses. Microsoft appealed, which led to a new consent decree approved in 2002 that curbed some Microsoft practices. The U.S. consent decree officially expired in 2011.

In parallel, the European Commission brought a different antitrust lawsuit against Microsoft for failing to provide interface information to allow rivals to connect into the Windows operating system. The European Commission accusations were later expanded in 2001 to include anticompetitive tying of the Windows Media Player with the Windows operating system. In 2004, the European Commission concluded that Microsoft had broken European Union law by leveraging its near monopoly in the market for PC operating systems onto markets for group-server operating systems and for digital media players. It ordered Microsoft to pay a €497 million fine ($620 million) and to make interface information available through “reasonable and non-discriminatory terms.” Other fines were added in 2006, 2008, and 2011, amounting to $2.1 billion for noncompliance. When the U.S. consent decree expired in 2011, Microsoft released this statement: “Our experience has changed us and shaped how we view our responsibility to the industry.”

While the exact accusations differed across the lawsuits and between the U.S. and Europe, what they have in common is that regulators accused Microsoft (and found it guilty) of abusing the company’s dominance in PC operating systems—the most widely used innovation platform before the smartphone. The first set of exclusionary practices involved bullying computer makers (usually referred to as original equipment manufacturers or OEMs). For example, Microsoft threatened to cancel their Windows licenses in order to discourage them from loading rival browsers such as Netscape Navigator onto computers bundled with Microsoft Windows. As we wrote in Competing on Internet Time: Lessons from Netscape and Its Battle with Microsoft (1998), Microsoft frequently resorted to these types of tactics and, in our view, clearly “stepped over the line,” illegally using its monopoly power to reduce competition. The second set of exclusionary practices had to do with tying: Microsoft bundled complements such as the Media Player or the Internet Explorer browser with Windows at no extra cost to end users. By doing so, the platform owner effectively reduced or even eliminated the attractiveness to consumers of rival products.
A third set of practices deemed unlawful in Europe involved preventing third parties from having reasonable access to the platform in the form of interface information, making it impossible for them to become complementors. European antitrust authorities ultimately proclaimed that a platform company must provide access and create a “level playing field” that was “reasonable and non-discriminatory” for third-party complementors.

The Microsoft case highlights an important lesson for today’s platform companies: Very few of the actions that violated antitrust laws were necessary or critical to retaining a dominant market position. Platform businesses, once established, are difficult to dislodge. Microsoft most likely would have kept the vast majority of its share of PC operating systems and its dominant position in browsers and media players without illegally bullying PC manufacturers, competitors, or complementors. In many ways, Microsoft took unnecessary competitive shortcuts. Rather than relying on the merits of its products and technology, Microsoft tried to leverage its position with operating systems to put competitors at a disadvantage. Yet Microsoft had many natural advantages because of its huge market share with Windows and Office, deep pockets, and technical insights into how best to utilize its platform technologies. It was likely to win most battles without ending up in court. Nevertheless, one dominant platform firm after another has fallen into a similar trap of probably unnecessary paranoia followed by an abuse of power. Let’s briefly consider one other example: Google with Android.
GOOGLE AND ANDROID
Alphabet-Google replaced Microsoft as the primary focus of antitrust actions, particularly in the European Union. In fact, the EU had three cases against the company. The first, launched in 2010, considered Google’s behavior with its search engine. The EU accused Google of promoting its own “vertical search” results over general content search results. The second focused on how the company prevented websites that used its search bar and ads from showing competing ads. The third concentrated on Google’s management of Android. The Google Android example highlighted how a dominant innovation platform could be vulnerable to antitrust complaints even if it was free (unlike Microsoft Windows).
We discussed in earlier chapters how Google licensed the Android mobile operating system for no charge to manufacturers of smartphones and tablets but made money from selling advertisements that came through its search engine. There is nothing illegal about this multisided platform strategy. However, the European Commission alleged in 2016 that Google imposed conditions on mobile phone manufacturers and mobile phone operators aimed at protecting Google’s search engine monopoly. The Commission complaint stated that Google breached EU antitrust rules in three areas: (1) requiring manufacturers to preinstall Google Search and Google’s Chrome browser and requiring them to set Google Search as the default search service on their devices as a condition to license certain Google proprietary apps; (2) preventing manufacturers from selling smart mobile devices running on competing operating systems based on the Android open source code; and (3) giving financial incentives to manufacturers and mobile network operators on the condition that they exclusively preinstall Google Search on their devices. Commissioner Margrethe Vestager, in charge of competition policy, explained: “We believe that Google’s behavior denies consumers a wider choice of mobile apps and services and stands in the way of innovation by other players, in breach of EU antitrust rules.”

In effect, the EU accused Google of the same kinds of tying, bullying, and exclusionary behavior of which Microsoft had been found guilty. Just as Microsoft was defending its 90-plus percent share of PC operating systems, Google was defending its approximately 90 percent share of global search and 80 percent share of smartphone operating systems. Although Android was ostensibly “free” and “open source,” the Commission’s investigation showed that it was commercially important for smartphone manufacturers using the Android operating system to preinstall the Play Store, Google’s app store for Android. In its contracts with manufacturers, Google also made licensing the Play Store on Android devices conditional on Google Search being preinstalled and set as the default search service, tying these products and services to the Android platform. As a result, rival search engines were unable to become the default search service for most smartphones and tablets sold in Europe.
Similarly, Google’s contracts with manufacturers required the preinstallation of its Chrome mobile browser in return for licensing the Play Store or Google Search. Google’s defense was that it wanted to reduce fragmentation, which would make it easier for developers to write new applications that worked on all Android phones and give consumers a more consistent experience. But the Commission argued that browsers were an important entry point for search queries on mobile devices, and that Google’s requirements reduced manufacturers’ incentives to preinstall competing browser apps and consumers’ incentives to download those apps.
The European Union fined Google’s parent company Alphabet $2.7 billion in 2017 and $5.1 billion in 2018 for anticompetitive behavior. As this book was being published, Google was appealing. We contend that, before things got to this stage, Google should have learned from the Microsoft case: It should have been more attuned to European antitrust rules and used less aggressive contracts. In the early days, when Android was just getting started, it made sense for Google to push the limits of its power. It was a new entrant into the smartphone platform business. Fragmentation, with multiple versions of Android, multiple browsers, and multiple app stores, unquestionably caused confusion among consumers and reduced incentives for application developers to absorb the expense of supporting incompatible Android versions. But, similar to the Microsoft case, once Android “won” the mobile OS wars, the vast majority of smartphone manufacturers were likely to bundle Google Chrome and Google Search anyway, regardless of the contract conditions. (Remember, Apple does not license its platform technology to anyone, for any price.) Google might have lost a few smartphone models to another app store or browser, but those losses would have been unlikely to stem the tide behind Android. Given its dominant position, and the consumer benefits of having access to the Play Store, Google Search, and Chrome, being a bully was unnecessary for Google by 2015.
Balance Openness with Trust: Privacy, Fairness, and Fraud
Antitrust is only one challenge for platform governance. Questions regarding responsibility and liability over activities conducted on a platform have become increasingly salient. All platforms require trust, which the dictionary defines as “reliance on the integrity, strength, ability, surety, etc., of a person or thing; confidence.” Since most modern platforms connect market actors that would otherwise struggle to interact, trust is essential.
To maintain trust, platforms need to prevent “bad actors” from contaminating the platform, doing damage to other platform users, or hurting the platform’s reputation in other ways. This proactive prevention process is often referred to as “curating” the platform. Curation can take several forms: restricting who can join; restricting what activities can happen; imposing transparency and authenticating members; giving users controls over who can contact them, who can see their content, and how to restrict access to the information they provide (like Facebook or LinkedIn privacy settings); and monitoring activity on the platform (such as removing content deemed inappropriate or illegal).
But curation is no panacea, nor is it cost-free. Curation for a very large number of users and their content can be difficult to perform effectively and expensive to implement. Although artificial intelligence tools are making curation easier and cheaper, the technology today is not sufficiently advanced to replace human intervention. Companies need thousands or even tens of thousands of human curators to police a large platform. In addition, curation might actually be counterproductive: We discussed in Chapter 4 how eBay got rid of known counterfeiters on its China site and then lost 20 percent to 40 percent of its user base. Many consumers went to Alibaba because they wanted access to fake goods and bought them knowingly. Curation can even anger free speech advocates, who see taking down content as a step toward censorship. Finally, curated platforms—by definition—restrict the number of users, which in turn can reduce the strength of network effects.
FACEBOOK’S GOVERNANCE CHALLENGES
Facebook is the poster child for examining the challenges of maintaining trust. Beginning in early 2016, Facebook faced a series of intertwined controversies that called into question its identity as an open platform, an identity Zuckerberg and other managers had strongly defended. We have already given some details of the controversy. The issues are complex, but they boil down to two major questions of platform governance: What is Facebook’s responsibility to monitor and curate the content shared on its platform? And what steps should Facebook take to protect users’ privacy and ensure that third-party developers and advertisers are not misusing user data?
Facebook has become increasingly important as a platform not just for sharing individual information but also as the major way people discover and consume news. Accordingly, it has come under increasing criticism for the nature and quality of content posted on its site. It had always policed content to some degree, such as removing content deemed violent, sexually explicit, hateful, or harassing. In its early years, though, Facebook primarily depended on users to flag objectionable content, which the company would review and remove if it violated platform policies.
Despite increasing influence over the news business, Facebook continued to insist it was an open and neutral platform, not a publisher or a media company. In fact, in early 2016, when Facebook came under criticism for allegedly suppressing conservative viewpoints, CEO Mark Zuckerberg was asked if Facebook “would be an open platform for the sharing of all ideas or a curator of content.” He is reported to have replied firmly that “we are an open platform.”

Events surrounding the 2016 election, however, called into question Facebook’s commitment to neutrality in its treatment of news content. During the course of the campaign, the volume of fake stories on the platform proliferated and users often shared those fake stories. Most damning were revelations that Russian actors had used Facebook to mount a propaganda campaign to influence the 2016 presidential election by creating fake accounts to post fake news stories. Although the number of fake accounts was small relative to the overall size of Facebook, the viral nature of posts magnified their influence. One study of five hundred posts by merely six fake accounts showed that users had shared them 340 million times. All of this had the effect of increasing polarization and division within the United States, for which Facebook came under harsh criticism from both the public and politicians.

The stakes were even higher in most developing nations (except China), where Facebook’s mobile app had become the dominant way people consumed news. In recent years, false and misleading stories disseminated on Facebook have contributed to widespread ethnic and religious violence in countries such as Myanmar and Sri Lanka. As one observer put it, “The fact that Facebook is the Internet for many digital users, combined with low levels of digital literacy, makes fake news and online hate speech particularly dangerous in Myanmar.”

Critics charged that malicious actors using Facebook in this way were simply taking advantage of Facebook’s platform model, which valued stories based on how often they were read, liked, and shared. It turned out that polarizing, simplistic, and false stories often generated more user engagement (and more advertising revenue) than sound news reporting! One Facebook executive acknowledged in late 2017, “If we just reward content based on raw clicks and engagement, we might actually see content that is increasingly sensationalistic, clickbait, polarizing, and divisive.” Ironically, that kind of content, while bad for the social fabric, was good for Facebook’s bottom line. As Wired noted, Facebook “sold ads against the stories and sensational garbage was good at pulling people into the platform.”

The public backlash and scrutiny from politicians led Zuckerberg and Facebook to reexamine the way content on its platform is monitored and to do more to police content. In a significant step away from being a neutral platform, Zuckerberg announced in January 2018 that Facebook would alter its algorithm for selecting stories to show in the news feed. The new goal was to promote news sources that were “trustworthy” and “informative,” with the trustworthiness of news sources based on surveys of Facebook users. Facebook also announced it would ramp up efforts to catch fake news, foreign interference, and fake accounts through human monitoring, the use of third-party fact-checkers, and improved algorithms.
Guy Rosen, Facebook’s vice president of product management, stated, “We are all responsible for making sure the same kind of attack on our democracy does not happen again.” Stepping up monitoring entailed compromises, however. As Zuckerberg noted, “When you think about issues like fake news or hate speech, right, it’s a trade-off between free speech and free expression and safety and having an informed community.”

Until recently, social media platforms were adamant that they were not publishers but rather passive conduits of unedited and uncensored information that was generated by their users. In 2018, the big Internet platforms were not legally treated as publishers. This was due to the so-called safe harbor provision of the 1996 U.S. Communications Decency Act, a landmark in Internet regulation that stated that platforms were not responsible for what people published on their sites. But this law was originally intended to protect areas such as newspaper comment sections. The application of this law has become very broad, encompassing virtually all content on social media and sharing websites. One of the reasons why social media platforms do not want to be treated as publishers is that publishers are legally responsible for their editorial and publishing decisions. If a newspaper published a libelous or defamatory story, it could be sued. And if it infringed someone’s copyright, it could be held liable for damages. Another reason for platforms to resist curating or curtailing some content was simply a numbers game: A Facebook executive, for example, argued that all content was good for platforms, as more content fed more users and fueled network effects. It has even been suggested that the more outrageous or shocking the content, the more traffic it actually drives. But the backlash against this apparent neutrality, and what many have judged as callous behavior by Facebook and others, has made the “neutral” stance of social media platforms an increasingly untenable position.
In effect, by not only policing but also selecting content, Facebook backed further away from its identity as a neutral platform and took a step closer to acting as a publisher. Facebook already employed some 15,000 content moderators in early 2018, and Zuckerberg promised the U.S. Congress that the number would grow to 20,000 by the year’s end.

The Cambridge Analytica scandal further exacerbated Facebook’s legitimacy and trust problems. Cambridge Analytica collected its data by 2014, when Facebook’s rules permitted apps to collect private information from users of the app as well as their Facebook friends. By 2015, Facebook had already changed its policy to remove the ability of third-party developers to collect detailed data about app users’ friends, but the extent to which third parties used personal data remained unclear to many users.
To address the obvious loophole in its privacy policies, in April 2018, Facebook put additional limits on the information third-party apps could access. Imposing such limits, of course, generated more trade-offs. The ability of third-party apps to collect and analyze user data was critical to Facebook’s business model as an open platform. High growth rates depended on both app developers and advertisers getting deep insights from user data, which enabled increasingly effective targeted advertising. As one observer put it, “Third-party developers [have] built millions of apps on top of Facebook’s platform, giving Facebook users more reasons to spend time on the site and generating more ad revenue for the company. Restricting access to data would limit Facebook’s usefulness to developers and could drive them to build on a rival platform instead.” Zuckerberg acknowledged the tension in a 2018 interview: “I do think early on on the platform we had this very idealistic vision around how data portability would allow all these different new experiences, and I think the feedback that we’ve gotten from our community and from the world is that privacy and having the data locked down is more important to people than maybe making it easier to bring more data and have different kinds of experiences.”

As with its fake news problems, the Cambridge Analytica episode pushed Facebook to engage in more robust oversight to enforce its rules around data sharing. When first informed of the data harvesting activity, Facebook asked for and received a legal certification from the developer that the data had been destroyed. Facebook also received assurances from Cambridge Analytica that it had not received raw Facebook data. Those assertions turned out to be false, and Zuckerberg would later acknowledge that accepting those assertions was a mistake.
Facebook’s troubles provided numerous lessons for other platforms. Beyond the big hit to Facebook’s stock price and market value, the potential loss of trust has provoked a more serious backlash and increased scrutiny. If movements such as “Delete Facebook” pick up steam, the long-run implications could be even more damaging for the company. Facebook’s experience also underscored the importance of curating preemptively, rather than waiting for the next crisis. At scale, powerful platforms cannot operate without enforcing rules of conduct on the platform and at least modest curation.
The trade-off between openness and curation creates a fundamental dilemma for platform governance: On the one hand, platforms should take some responsibility for how their platform is used. On the other hand, few people want platforms to become the new censors. For example, a debate raged in August 2018 when Facebook, Apple, Spotify, and YouTube moved to ban Alex Jones, the U.S. right-wing radio host and political commentator behind InfoWars. In effect, digital platforms were trying to have it both ways: taking advantage of the fact that they were not publishers to escape responsibility while, at the same time, increasingly acting like publishers in deciding which views and people were permitted on their platforms.
The Workforce: Not Everyone Should Be a Contractor
One of the most attractive features of platforms for financial investors is that they can be asset-light. Uber does not own taxis. Airbnb does not own apartments or houses. OpenTable does not own restaurants. Instead, most platforms connect people or companies with valuable assets and skills to other people and companies who want access to those assets and skills. While asset-light platforms potentially provide highly leveraged returns to investors, they create another challenge for human capital: How should platforms manage a workforce largely composed of “independent contractors”? Unlike employees, independent contractors are owed no benefits, guaranteed hours, or minimum wage, enabling the enterprises that engage them to keep labor costs low. There were 57 million freelancers in the U.S. in 2017; for one-third of these people, freelance activity was their main source of income. Stephane Kasriel, CEO of Upwork, claimed that this class of workers is growing three times faster than the traditional workforce. One estimate suggests that, if the current trend were to continue, freelancers could represent 50 percent of all U.S. workers by 2027.

Platforms such as Uber, Grubhub, TaskRabbit, Upwork, Handy, and Deliveroo classify much of their workforce as independent contractors. The companies justify this practice because the workers tend to perform their jobs as a side activity, with significant flexibility in their hours. In reality, the classification is mostly about saving costs: Industry executives have estimated that classifying workers as employees tends to cost 20 to 30 percent more than classifying them as contractors. The classification is therefore critical because many transaction platform start-ups rely on it to avoid high labor costs. Some even argue that the whole “gig economy” would collapse if start-ups were obliged by law to classify all their associated workers as employees. But this widespread practice is becoming increasingly controversial. In the United States, the situation is particularly complex because the laws that determine independent contractor and employee status vary from state to state, and even city to city, although many regulations focus on how much control workers have over their work.
The highest-profile debates over contractors have involved Uber and ride-sharing platforms. Arguments could be made on both sides of whether Uber drivers were employees or contractors. On the contractor side, drivers supply the tools for their work (the cars), are paid by the job, and control their work hours, geographic area for pickups, and whether or not to accept a passenger’s request for a ride. On the other hand, Uber sets the passenger pay rate, the method of pay, and which passengers the drivers must pick up, and the company immediately removes from the app drivers whose ratings fall below 4.6. By contrast, Uber’s rival Lyft agreed in January 2016 to pay $12.3 million as part of a labor lawsuit settlement. Lyft also agreed to change the terms of its service agreement with drivers so that it could only deactivate them for specific reasons, such as low ratings from passengers, and would give them an opportunity to address feedback before being deactivated. It also agreed to pay arbitration costs for drivers who wanted to challenge being deactivated or who made other compensation complaints.
If an employer is mainly focused on the outcome of the work being performed, there is a good chance the workers are properly classified as independent contractors. But when the employer begins to control not only what work the workers do but how they do it, that classification gets murky, as several examples demonstrate.
HANDY
Handy was a start-up platform hatched out of Harvard University in 2012. It originally connected cleaners to people who wanted their homes cleaned and then expanded to other “handyman” services. As of 2018, the company operated in the U.S., Canada, and the U.K., and reportedly had raised over $110 million in venture capital. Handy offered an easy way to arrange help for cleaning and other tasks, with the person you hired vetted through customer ratings and a background check. As a transaction platform, it provided value to both service providers (workers) and users. Workers who used Handy had an easier time attracting new customers and ensuring they were adequately paid; with Handy, they made $15 to $22 an hour, based on their online rating. Users looking for help found it easier to identify workers, who were vetted and came with visible review scores.
As with other platforms offering these types of services, Handy classified its unskilled workers as contractors, not employees. Handy CEO Oisin Hanrahan justified the classification by claiming that the biggest benefit of the contractor system was that cleaners, whom Handy dubbed “pros,” were free to keep their own schedules. “Our pros value flexibility and getting to say where and when they want to work. Fifty percent work less than 10 hours a week and eighty percent work less than 20. Typically, they’re folks who work another job, are in school, or caring for parents,” Hanrahan said. He added that it was unreasonable to expect people who worked intermittently to be covered by the same system as someone who worked a forty-hour week with benefits.

But opposition to classifying such workers as contractors continued to grow. California lawyer Shannon Liss-Riordan became notorious for leading worker class-action suits against transaction platform companies, having spearheaded lawsuits against Uber, Lyft, and nine other firms that provided on-demand services. In an interview with Fortune magazine, she scoffed at the comments made by Handy’s CEO: “Those arguments ring hollow. The cleaners are not getting wage protection, and they’re not getting workers’ compensation or unemployment insurance,” said Liss-Riordan. “Handy is not just Craigslist. It’s an employer controlling a workforce.” She added that Handy’s use of training guides, the company’s control over work rules and prices, and its requirement that workers wear a Handy logo proved that Handy cleaners were employees.

In August 2017, the National Labor Relations Board issued a complaint against Handy, alleging that the workers who provided its home cleaning services were employees, despite the company’s claims to the contrary. It alleged that Handy “has misclassified its cleaners as ‘independent contractors,’ while they were in fact statutory employees” entitled to the protections of federal labor law. Homejoy, another on-demand cleaning company, was also sued in a class action over worker classification in 2015 and had to shut down. It is not yet clear how Handy will fare in the future or whether senior management will change its position.
DELIVEROO
The contractor versus employee classification dilemma is not just an American problem. We have seen similar complaints arise for Deliveroo in the United Kingdom, where its bicycle couriers, a familiar sight on London streets, deliver restaurant meals ordered on a mobile app. As self-employed contractors, Deliveroo couriers were not entitled to the rights available to regular workers, including sick pay and the national living wage. The Independent Workers Union of Great Britain (IWGB) brought a test case in 2017 to fight for the right of union recognition at Deliveroo in Camden and Kentish Town, London. The hearing took place in front of the U.K. Central Arbitration Committee (CAC), an independent body that adjudicates on statutory recognition and de-recognition of trade unions in relation to collective bargaining. The CAC agreed with Deliveroo’s argument that its riders were self-employed contractors, rather than workers.
The CAC decision rested on a specific practice called the “unfettered right of substitution,” which Deliveroo accepted for its riders in Camden and Kentish Town. Deliveroo riders could nominate any individual to perform deliveries in their place. This right was available without Deliveroo’s prior approval, save only that substitutes could not be riders whose own supplier agreements with Deliveroo had been terminated or who had engaged in conduct that would have provided grounds for termination. There were no adverse consequences if a rider nominated a substitute. Equally, riders did not have to accept a certain proportion of jobs and were not penalized for turning down work. The ability of couriers to substitute or obtain a replacement for the job was central to the CAC’s finding that they were not “workers.” Since the change in contracts happened only a couple of weeks before the adjudication, representatives of the IWGB claimed that Deliveroo had “gamed the system.” The CAC decision affected only a small geographic area in North London; in other places, Deliveroo had various forms of contracts with riders. The IWGB applied in 2018 to have the CAC decision reversed in a judicial review.
THE BIGGER PICTURE: A CHANGING LEGAL REGIME
The problem of how to classify workers is not specific to platforms. FedEx, for example, had long tried to classify its drivers as independent contractors. It faced challenges as well, including two class-action lawsuits: one brought by 2,000 drivers in California and another by over 12,000 drivers in Indiana and eighteen other U.S. states. In these cases, drivers claimed they were undercompensated compared to full-time workers. FedEx settled the first case in June 2015 for $227 million and the second case in 2017 for another $227 million.

Workforce regulation will proceed amid a changing legal environment, and it is going to become increasingly difficult for platforms or other firms to classify workers as independent contractors. In an April 2018 landmark ruling, the California Supreme Court significantly narrowed the circumstances under which California businesses may classify workers as independent contractors rather than employees. The decision presumes that all workers are employees, sets out a new three-part “ABC” test that businesses must satisfy in order to classify workers as independent contractors, and places the burden on the business, not the worker, to prove that any particular worker is properly classified as an independent contractor. Under the ABC test, the business bears the burden of proving the worker satisfies all three of the following factors:

(A) The worker is free from control and direction of the hiring entity in connection with the performance of the work, both under the contract for performance of the work and in fact.
(B) The worker performs work that is outside the course of the hiring entity’s business.
(C) The worker is customarily engaged in an independently established trade, occupation, or business.
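For readers who think in code, the conjunctive, burden-shifting logic of the test can be captured in a short Python sketch. It is purely illustrative: the function and parameter names below are our own hypothetical shorthand, and the code simplifies a legal standard rather than stating legal advice.

    # A minimal, illustrative sketch of the ABC test's conjunctive logic.
    # Hypothetical names; a simplification of a legal standard, not legal advice.
    def is_independent_contractor(free_from_control: bool,
                                  work_outside_hirers_business: bool,
                                  independently_established_trade: bool) -> bool:
        # The business must prove ALL three factors; if any one factor fails
        # (or cannot be proven), the worker is presumed to be an employee.
        return (free_from_control
                and work_outside_hirers_business
                and independently_established_trade)

    # Example: a driver working inside the hiring entity's core business
    # fails factor (B), so the presumption of employee status stands.
    print(is_independent_contractor(True, False, True))  # False -> employee

The point of the sketch is the default: unless the business affirmatively establishes all three factors, the worker is an employee.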
This development marks a substantive change from previous regulation. For decades, the common law test in California of whether workers were employees or independent contractors involved the employer’s “right to control” the manner and means by which workers performed their duties. In the new California ruling, a business’s failure to prove any one part of the ABC test will result in the workers being classified as employees under the applicable California wage order. By shifting the burden to the business, the California Supreme Court created a presumption that workers are employees. Furthermore, the circumstances of the working relationship will decide the question; businesses may not avoid the ABC test by way of a contract in which the parties agree the workers are independent contractors.
This legal ruling has immediate ramifications for businesses throughout California, where many platforms operate. It is also likely to influence practices outside California. Shortly after the California Supreme Court ruling, San Francisco city attorney Dennis Herrera announced he was subpoenaing Lyft and Uber to see how they classified their drivers and to obtain data on pay and benefits. If those drivers should in fact be considered employees, the ride-for-hire firms would owe them minimum wages along with sick days, paid parental leave, and health benefits, according to Herrera. This would have drastic financial consequences for Uber and Lyft in California.
These issues were also part of a national debate in the United States about workers in the gig economy more broadly. We cannot predict with any certainty how this debate will end. Nonetheless, we believe that the days of large platforms treating everyone as a contractor are probably over, at least in the United States. Legal and reputational risks are rising dramatically for firms like Uber. For workers who perform tasks central to a business, the bigger, successful platforms will need to offer some benefits that are comparable to what regular employees receive. Start-ups can get away with using mostly contractors at first because they are still small and under the radar of regulators and competitors. As the companies grow in size, however, they need to grow up in terms of policies. Once platform companies get beyond the start-up phase, expectations of compliance and fairness, from workers as well as customers and regulators, will change. Platforms that get into the most legal trouble are likely to be those that do not recognize when they pass that threshold between start-up mode and established company.
As platforms move from cleaners and taxi drivers to highly paid white-collar contractors, workers in the gig economy will become more highly educated, with greater bargaining power. In order to be sustainable enterprises and to be accepted as beneficial contributors to society, platform companies need to adopt the same values as the societies in which they function. Given the growing sensitivity to issues of fairness, powerful platforms risk destroying their reputations. How platforms treat the people who contribute to their success will become an increasingly important part of building and maintaining their reputations. Reputations, in turn, will impact how well platforms compete long term with each other as well as with traditional businesses.
Self-Regulate: Work with Regulators Before They Pounce
Platform start-ups often break rules. By finding new ways to do things, they have often skirted sectoral regulations and traditional tax collection. Some of the strongest critics, such as our colleague Ben Edelman, argue that platforms such as Uber have deliberately flouted regulations and that this systematic evasion of the law is at the core of their business models. Uber and Airbnb offer services equivalent to those of taxi and hotel companies, but by characterizing themselves as technology companies that are mainly “app providers,” they escape the safety, insurance, hygiene, and other regulatory requirements that apply to taxis and hotels.
On one level, the question is simple: Is Uber a transportation company? Is Facebook a media company? Is Airbnb a hotel company? Is Amazon a local retailer (subject to sales tax) and not an online catalogue company? If the answer is yes, then shouldn’t these platform businesses be regulated like other firms in their sectors? At the heart of the issue is whether we can or should categorize a platform business as the same type of company that it competes against in the traditional economy. The answer has serious implications for operating costs and liability.
A number of national and international institutions (the European Commission, the OECD, the French Conseil National du Numérique, the German Competition Authority, the U.K. House of Lords, and others) have engaged in vigorous debates as to whether platforms ought to be governed by a new line of bespoke platform regulations. These debates are likely to continue for several years, although some countries have already changed their laws. The French Parliament, for example, adopted a law on “platform fairness” (loyauté des plateformes) in October 2016. The broad direction for platform regulation in Europe can probably be predicted from the European Commission’s 2016 statement, which offered four fundamental principles to foster a “trusting, lawful, and innovation-driven ecosystem around online platforms in the EU”: (1) a level playing field for comparable digital services, (2) ensuring that online platforms behave responsibly to protect core values, (3) fostering trust, transparency, and ensuring fairness, and (4) keeping markets open and nondiscriminatory to foster a data-driven economy.

Managers and entrepreneurs at platform companies need to get ahead of this curve. Self-regulation tends to be less costly for firms than government-imposed regulation. Competitive advantage early in the life of a platform may come from exploiting regulatory loopholes (e.g., Uber with drivers, or Amazon not paying state and local taxes). But as platforms become more powerful, preemptive self-regulation is usually the better strategy, as Amazon demonstrated when it voluntarily decided to collect state and local sales taxes before being required to do so.
AMAZON SELF-REGULATES: SALES TAXES IN THE UNITED STATES
Amazon has drawn considerable attention because of its aggressive expansion strategy, leveraging its strong position as an online retailer, initially with books, to move into many other retail markets as well as related services. It also holds a dominant position in cloud computing. Yet most of Amazon’s market shares are well below what we would normally consider monopolistic positions. For example, in 2018 it accounted for 43 percent of online retail but only 4 percent of total retail in the United States. No government regulator to our knowledge has suggested that Amazon violated existing antitrust law.

Nonetheless, some people have argued strenuously that we need new antitrust regulations to curb how Amazon and other platforms exploit their positions in one market to enter into others, such as by using customer data or platform transaction information to gain an advantage in pricing or market entry not available to competitors. The issue is particularly complex because platforms such as Amazon generally bring lower prices to consumers, at least in the short term. In the longer term, however, driving competitors out of business ends up restricting consumer choice, which tends to lead to higher prices. At least, that is the theoretical argument against the kind of customer tying (for example, marketing different products and services to Amazon Prime members) and vertical integration (such as using information gained on third-party sales through the Amazon Marketplace transaction platform to enter those product segments directly) that Amazon has pursued.

Historically, Amazon took advantage of a 1992 U.S. Supreme Court ruling that a U.S. state can require retailers to collect a sales tax only if they have a physical presence in that state. This rule was originally designed to protect catalogue companies such as L.L.Bean, which shipped products nationwide from one or two locations. In its early days, Amazon exploited that law by keeping warehouses out of populous states like California. The research director of the Institute on Taxation and Economic Policy, Carl Davis, claimed that “there is no doubt that Amazon used its ability to not collect sales tax to gain a competitive advantage.” As recently as 2012, Amazon was collecting sales taxes in only five states and had “cut ties with in-state businesses to avoid collecting sales tax” in several states between 2009 and 2014.
But as the company grew and focused more on reducing delivery times, Amazon reached deals with many states to set up warehouses inside their borders. As part of those agreements, Amazon typically began collecting sales taxes within a few years. The company started to collect sales taxes on its own goods in 2012 in California, where it built warehouses, as well as in Texas, Pennsylvania, and a few other states. This activity steadily increased over the years. As of mid-2018, Amazon was collecting sales taxes in all forty-five U.S. states that have a sales tax. Joseph Bishop-Henchman, the executive vice president of the free-market-oriented Tax Foundation, said: “Other large e-retailers, most notably eBay, generally do not collect sales tax still.” Amazon, on the other hand, “ultimately changed their position.”

There remains, however, an area of ongoing contention around sales tax at Amazon. On Amazon Marketplace, which accounts for more than half of all unit volume transactions performed on Amazon, sellers list their products for sale on the marketplace and determine their own prices. Many sellers take advantage of an additional program, called Fulfillment by Amazon, through which Amazon stores and ships their inventory. The sellers pay Amazon fees for those services, but the e-commerce giant leaves it up to them to collect sales tax where they are required to do so.
We applaud the decision to self-regulate on state and local taxes. Amazon was becoming a powerhouse in American e-commerce, and Jeff Bezos and other managers must have understood that powerful retailers, like Walmart before Amazon, are frequent targets of attack by local communities afraid of the loss of jobs and competition with local vendors. By taking sales taxes off the table, Amazon eliminated a potentially serious source of friction. At the same time, Amazon’s competitive advantage no longer depended on offering lower prices by avoiding sales taxes: Recent research has suggested that paying state and local taxes was not even one of the top ten considerations for consumers deciding whether to buy from Amazon.
YOUTUBE: HOW GOOGLE AVOIDED REGULATORY INTERVENTION
The early days of YouTube under Google provide another example of the “Wild West” of platform media: Virtually all content went unsupervised from 2006 onward. But in 2017 and 2018, YouTube faced heightened scrutiny in the wake of reports that it was allowing violent content to slip past the YouTube Kids filter, which was supposed to block any content inappropriate for young users. Some parents discovered that YouTube Kids was allowing children to see videos with familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing imagery. Other reports uncovered “verified” channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcams of young girls in revealing clothing.

YouTube also repeatedly sparked outrage for its role in perpetuating misinformation and harassing videos in the wake of mass shootings and other national tragedies. Survivors and the relatives of victims of numerous shootings were reportedly subjected to online abuse and threats, often tied to popular conspiracy theories featured prominently on YouTube. Parents of people killed in high-profile shootings tried to report abusive videos about their deceased children and repeatedly called on Google to hire more moderators and to better enforce its policies.

In response to this increasingly negative press and public sentiment, YouTube CEO Susan Wojcicki announced in December 2017 that Google was going to hire thousands of new “moderators,” expanding its total workforce to more than 10,000 people responsible for reviewing content that could violate its policies. In addition, YouTube announced it would continue to develop advanced machine learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and to block hate speech on the site would be modeled after its ongoing work to fight violent extremist content. The goal of the machine learning technology was to help human moderators find and shut down hundreds of accounts and hundreds of thousands of comments.
This application of technology seemed to be working. YouTube claimed that machine learning helped its human moderators remove nearly five times as many videos as before, and that 98 percent of the videos removed for violent extremism were now flagged by algorithms. Wojcicki claimed that advances in the technology allowed the site to take down nearly 70 percent of violent extremist content within eight hours of upload. "Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content," Wojcicki wrote in a blog post.
Problematic content affected several sides of the YouTube platform. Some advertisers pulled their ads after finding them placed alongside inappropriate videos containing hate speech and extremist content. Then several high-profile brands suspended their YouTube and Google advertising after reports revealed that their ads were appearing alongside videos filled with sexually explicit or exploitative content about children. YouTube announced in December 2017 that it was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation, and expand its team of ad reviewers.
In January 2018, YouTube announced that videos from its most popular channels would be subject to human review, preemptively checking large amounts of content to ensure it met "ad-friendly guidelines." By doing so, YouTube raised the bar for video creators who wished to run ads on their content, while hoping to allay advertiser unease about the video-sharing website. Advertisers could now choose to place their ads on "Google Preferred" channels, which would be manually reviewed, and to run their ads only on verified videos. YouTube announced it would complete manual reviews of Google Preferred channels and videos by March 2018 in the United States and in all other markets where it offered Google Preferred.
Facebook, Google, and other platforms could have avoided some of their current difficulties had they pursued self-regulation earlier. Although platforms have so far avoided punishing regulation in the United States, European governments have acted more aggressively. In May 2017, the European Council approved proposals that would require Facebook, Google (YouTube), Twitter, and other platforms to block videos containing hate speech and incitements to terrorism. The regulations, which still need to be passed by the European Parliament before becoming law, would be the first EU-wide laws holding social media companies accountable for hate speech published on their platforms.
The good news for Google and other platforms is that self-regulation appears to be working: The European Commission backed away from its plan to propose binding EU legislation that would force online platforms to remove posts containing hate speech. In a press conference in January 2018, EU justice commissioner Věra Jourová said she did not plan to regulate tech firms over hate speech. Instead, she wanted to continue relying on a nonbinding agreement that she had brokered in 2016 with Twitter, YouTube, Facebook, and Microsoft, which she said was now working. "Each of the four IT companies has shown more responsibility," Jourová told the news conference. "It is time to balance the power and responsibility of platforms and social media giants. This is what European citizens rightly expect," she added. Jourová praised Facebook's announcement in 2017 that it would hire 3,000 people to monitor its users' posts for hate speech. Facebook also said it planned to add five hundred staff members in Germany to review complaints about hate speech. According to the Commission's newest figures, from January 2018, Twitter, YouTube, Facebook, and Microsoft reviewed about 82 percent of complaints about hate speech within twenty-four hours, a significant improvement over May 2017, when the firms reviewed only 39 percent. By January 2018, Facebook had removed 79 percent of posts containing hate speech across the EU, YouTube had taken down 75 percent, and Twitter had removed 45.7 percent.
Key Takeaways for Managers and Entrepreneurs
In this chapter, we discussed how digital platforms have morphed from beloved mavericks into feared tech giants. We showed how the mood about platforms has changed, especially with regulatory scrutiny on the rise. The most important minefields platforms face are running afoul of antitrust law, pursuing growth and network effects at the expense of maintaining trust, and cutting labor costs to the point of potentially breaking labor laws and destroying workforce relationships. If we accept that managing platforms has become a "double-edged sword," capable of both good and evil, what are the key takeaways for managers and entrepreneurs?
The first and most important lesson of this chapter is that managers must strike the right balance: pursuing growth without abusing market power. In the early days of Internet retail, social media, ride sharing, room sharing, and other gig-economy ventures, there were many areas of legal and regulatory ambiguity. The platforms that exploited these ambiguities gained an advantage over platform competitors as well as over firms in the traditional economy. When the rules are black-and-white, platforms must take care not to cross the line into illegal behavior, whether in antitrust, labor law, taxes, or industry regulation. But when the rules contain gray areas, platforms are likely to test the limits of the law and of social mores, such as by classifying most workers as contractors rather than employees. It is fair to say that Uber and Airbnb probably would never have gotten off the ground if they had followed the letter and the spirit of the law. Nonetheless, two points bear repeating: (1) the mood change means that platform behaviors that were tolerated in the past will be less tolerated in the future; and (2) as platforms grow in scale and power, they will come under closer scrutiny and must obey a different set of rules or adhere to existing rules more closely.
Second, emerging platforms can learn from the experiences of Microsoft and Google how to mitigate antitrust concerns. The natural tendency for many platforms has been to wait until the regulator acts, or to wait for a backlash from users and partners. Propelled by their entrepreneurial energy, and possibly cognitively constrained by it, founders sometimes find it difficult to acknowledge their own "power." We say: Do not wait! To be more proactive, platforms should build internal capabilities (such as specialized teams) that keep abreast of regulations in different countries (and sometimes in different states) and identify which activities to avoid. Then they must educate managers, employees, contractors, and other business partners on what not to do. We know this is possible: After seeing how antitrust severely disrupted AT&T's business during the 1980s, Intel CEO Andy Grove introduced strict internal procedures to minimize Intel's exposure to antitrust scrutiny. For close to twenty years, Intel under Grove largely avoided serious antitrust problems, despite its dominant market share in microprocessors.
Third, we believe that platforms should preemptively self-regulate to reduce the likelihood that governments will intervene and alter the playing field in ways that harm them, their ecosystem partners, or consumers. As a start, platforms will have to invest much more in curation to strike the right balance between openness and trust. This will add costs, even as artificial intelligence, machine learning, and other forms of algorithm-based surveillance advance. Similarly, platforms will need to evolve their workforce rules and benefits, adopting flexible arrangements that are consistent with local regulations and that accommodate both full-time employees and independent contractors. We hope many countries will also adapt their labor regulations to the digital economy and to the growing number of part-time "gig" workers. In any case, platform companies, like firms in the traditional economy, have to learn to value the benefits of a stable and capable workforce and to incorporate better working conditions into their strategies and business models. Contractors who work full-time for a platform should be full-time employees, regardless of the local, state, or national rules. Platform companies whose only competitive advantage comes from flouting regulations or from exploiting workers to the point that they cannot earn a decent wage should not be allowed (either by the market or by regulators) to persist. We believe that a combination of market backlash and government regulation will ensure that they do not succeed in the long run.
Aggressive platform businesses will need to adapt to the current environment with greater self-regulation and curation. Curation is a tricky, potentially counterintuitive issue, since the power of platform businesses relies on network effect–fueled growth. The logic of network effects suggests that platforms should lean toward open membership, neither curtailing members' behaviors nor excluding members. The paradox, as we have seen with Facebook, is that the most outrageous content often goes viral and attracts more users and more advertisers. But entrepreneurs, managers, and boards of directors need to be mindful of the potential abuse of platform power sooner rather than later, and before regulators pounce. We contend that the Wild West days of platform businesses in Western countries are coming to an end. (China is already an exception: It closely regulates digital platforms.) The global reach and power of platforms have become increasingly obvious. Consequently, we expect that the way platforms deliberately and strategically curate their content and memberships will define what they really stand for. How platforms govern their ecosystems will express the values of their leaders and of their entire organizations. Governance policies, in turn, will become an intrinsic part of their value propositions and will either attract or repel users and ecosystem members.
What we expect from platform businesses in the future, at least in terms of new technologies and market opportunities, is the subject of the final chapter.