Why Elon Musk Bought Twitter
And What Musk’s Twitter Takeover Means for Consumers and Digital Businesses
He really did it. Elon Musk, one of the most idolized and despised public figures on earth, bought Twitter. For $44 billion. Enough to end world hunger nearly seven times over. Some Twitter users (and former Twitter users) are thrilled; others are leaving in droves. My friends are telling me to join Mastodon or CounterSocial. I’m not sure what to do.

Some investors are excited (Twitter shares rose 5.9% the day of the announcement), while others are devastated (Tesla shares dropped more than 12%, losing $125 billion in market value). Many online advertisers, digital businesses and ad agencies are considering abandoning the platform altogether, while others may see the potential for new, creative advertising opportunities.

Musk is, of course, the same billionaire who’s launched numerous paradigm-pushing tech endeavors designed to advance society. But if you’re like most consumers, you’re probably at least a little concerned; a Preply.com survey of 2,000 Americans found that two in three did not want Musk to take control of Twitter.
In a statement on April 25, 2022, Elon Musk, a self-proclaimed “free speech absolutist” (without the track record to support it), said, “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.” But this is the same Tesla CEO who fired employees for telling the truth about his products, and the same heir to an apartheid-era emerald mining empire who tweeted at a random Twitter user that he and his friends in politics would “coup whoever we want.”
Democracy Now! said Musk has been “an abusive bully on Twitter for years.” And according to Digital Rights Watch, “Musk’s style of free speech absolutism will tilt the scales in favor of the rich and powerful who can silence or bully critics. What Musk really seems to want is freedom from accountability.”
Shahid Buttar, an organizer-slash-politician running against Nancy Pelosi, agrees. He told me, “Musk’s Twitter takeover reflects a key component of emergent fascism in the United States: the extension of state power by constitutionally unaccountable corporations.” Pretty daunting.
Across the internet, though, the consensus appears mainly to be uncertainty and a little confusion. From the leaked audio of a post-takeover “All Hands” meeting, it’s clear Twitter employees weren’t spared; many expressed concerns about the effect Musk will have on their careers, the health of the platform, and the safety and security of Twitter users.
Of course, as some have pointed out, Twitter’s founder and former CEO, Jack Dorsey, is a ‘crypto tech bro,’ too — so it is possible nothing will change at all.
Until further notice, these are the most significant questions facing businesses/advertisers that use Twitter:
- Will my audience(s) remain on the platform?
- Will platform policy changes impact the effectiveness of my digital marketing and/or digital advertising efforts?
- Will we still want to remain on the platform, given our core values?
For answers to these and other pressing questions, I spoke with David Greene, Senior Staff Attorney and Civil Liberties Director at the Electronic Frontier Foundation (EFF). (Sorry, next time I’ll show my face.)
What Elon Musk’s Purchase of Twitter Means for Digital Businesses, Online Advertisers, Consumers, and Our Civil Rights: A Conversation with EFF
"There's a potential that nothing could happen but also potential for great changes that would completely change the way people engage with the platform. I think for businesses, again, advertising might completely change. Also, the people who are on it, the audience that's on it might completely change." - David Greene
The Video
The Transcript
Philip Mandelbaum (PM): Hey, this is Philip Mandelbaum. I’m interviewing someone pretty special today with Customer Engagement Insider. His name is David Greene, and he's the Civil Liberties Director at the Electronic Frontier Foundation, commonly known as EFF. And I asked David to join me today because something pretty wild happened yesterday that everyone on Twitter seems to be freaking out about. And I would like to maybe add some knowledge to the discussion, some experience to the discussion, and kind of find out if we should be freaking out or if we shouldn't. How is Elon Musk buying Twitter going to impact the general population? How is it going to impact civil liberties? And how is it going to impact someone who uses Twitter to promote products, or a brand, or a personality? Welcome. Thank you.
David Greene (DG): Happy to help try and sort these things out.
PM: Great. So let's start with civil liberties. We have no business if we aren't able to express ourselves freely and without fear, I guess. Right? So even businesses are impacted by that. What have EFF’s fears been, even leading up to this takeover, so to speak? And how, maybe, have they changed, if at all, since Elon Musk took over?
DG: Yeah, so let me talk not so much about Twitter in particular, but about what our policy positions are and how we work in this space. It's sort of generally called content moderation: how these social media platforms and other online services treat user speech as it passes through them, and how they engage with it, right? Because they almost all do, they all do something. They all either decide to highlight some things, or they have tools that identify what they think their users want to see. That's why most people sign up for these services, because they offer these types of features that help them find stuff they think they want…
PM: Of course they’re not always right.
DG: And a lot of times we laugh at how wrong they are. Right? That, you know, “I didn't want to join that group, what were they thinking? That's ridiculous. Oh, their algorithm’s awful.” But there are human rights concerns to this, obviously, because these things are global products; even if they really didn't want to be, they largely are. And there are human rights concerns because in some places around the world, and not even just in non-democratic societies but in both democratic and non-democratic societies, some of these platforms run by private people can really be either the most effective place for someone to speak or sometimes the only place they can really speak. The ability to speak pseudonymously, which many, although not all, of the services offer, is really crucial. It’s really crucial for many people who don't have the privilege or luxury to be able to put their real name with their words. That's really critically important for political dissidents in, again, non-democratic societies — even people who just within their own small community cannot be open about their views. And so when a platform chooses to engage with user speech, and maybe does it in a way that makes it less visible or removes it or doesn't allow the speaker to use the service, that can have human rights concerns.
At the same time, there can also be human rights concerns with the complete free-for-all use of these services because, as we've seen — and again, this is not unique to social media at all — they can be used for personally directed abuse, for harassment, to undermine democratic institutions. And so again, we get human rights concerns just floating around this issue of content moderation. So EFF's position has been to realize that this content moderation, first of all, everyone does it — everybody moderates. Even the services that say they are completely pro-free speech, they all moderate… And pretty much everyone always has. I think sometimes you hear these stories about this golden age, and that really has never been the case. I think to the extent someone had the experience where they thought speech wasn't moderated, it really was just because they were fortunate enough to not be speaking as an unpopular speaker. Especially if you talk to people who are sexual freedom advocates: their work, their legal speech, has always been moderated off the internet.
Anyway, this is a long way of saying that EFF’s position with respect to content moderation is to understand that everyone's going to do it, not to tell them not to do it. In fact, in some cases, we think it's appropriate. I think it's completely understandable that users may want a forum that is either moderated by subject matter — like, “I just want to have a social media service that's devoted, you know, to kayaking, and I want to be able to remove speech that doesn't pertain to kayaking;” or, even, “I want to have a social media service entity where I can just talk to people who share my political views, and so I'm okay if they kick out the people who don't share my political views.” I don't want that to be the only thing that exists on the internet, but I'm okay with that system.
And then also really to protect users who feel like they're being subjected to abuse, harassment and silencing in that way, or they feel like they can't use the service because they get attacked on it. So what we have done is to really urge companies, first of all, to recognize that, at least certainly under US law, they have a legal right to moderate, a legal right to edit and curate their sites — that's pretty well established under the First Amendment — [and] to do so in a way that's consistent with human rights principles.
And so we're one of the organizations that has drafted and endorsed the Santa Clara Principles, now in its second version, which is just sort of a framework for how you think about content moderation in a human rights-respecting way. It really asks and urges sites to have clear rules, and understandable rules, and to enforce them consistently; to notify users if there's a negative treatment of either their accounts or their speech; to give them an opportunity to appeal; to be culturally competent in the decisions they make, to have the language competence; and to be able to reach out to experts when they don't have in-house expertise for this stuff. So that's the approach that we've taken.
PM: I think that's really critically important. Someone like, you know, an average user isn't going to be able to tell if they're doing it that way. Right? Because there have been times when I've seen really rabid hate speech, and then I go to the account and they have like one follower, but every single tweet is horrific — things I wouldn't even repeat. I report them. Twitter says “they haven't broken our policies.” Okay. Any thoughts on that? Just curious.
DG: I think everybody has had an experience where they find that they are bewildered by why a platform, Twitter or Facebook or Instagram or Pinterest or whatever, made a decision. Like, they think that it wasn't consistent with what their community standards are… And I don't apologize for the services doing this. But these mistakes, they're really inevitable, and it's impossible to eliminate them.
PM: Is part of the problem because it’s not human beings doing it? Or is it human beings doing it?
DG: The main problem is just the scale. Even the smallest sites, the scale of decision making is massive. And because there's a lot of decisions, there's going to be a lot of mistakes. I think that's unavoidable. And I do think that to the extent that they try and deal with the scale issue by automating their decision making, the automated tools, as far as we can tell, have really very limited utility in making these highly contextualized decisions. So maybe that will change in the future. Maybe those tools can get better. They are good at some things, but they're not good at other things. And there's just not enough person power available to put a human on every one of these decisions.
PM: And I would say, Twitter, for instance — you know, this is the subject for today — is so big that it’s one of those companies where I don't expect good customer service. But outside of something like this, in most cases, not only do we expect it as consumers, but we are going to make decisions to not support a company if it's bad; for many consumers, one bad experience is all it takes. And automation can be such a great thing. I just wrote an article about all the ways automation can help in marketing and customer experience. But there have been so many instances of companies using automation and having it go terribly sideways, because they didn't incorporate automation into an overarching strategy that does involve humans, and does actually kind of control the automation to do the right thing. Like, if you just let it run, it's likely to not follow what a human would do.
DG: Yeah, I think there have been lots of studies, too, just about bias in automated decision making being really problematic. If you put in discriminatory data, if it's a discriminatory data set, which you have to really actually make an effort to avoid, you're going to get discriminatory decision making, even if that wasn't anybody's intention. And so there are a lot of issues with it. At the same time, there probably has to be some role for it, just because, again, the scale is so massive. It's just, what is that role? We hope it's not the ultimate decision making role. Or at least we hope that if it is, they have confidence that the systems work. And as far as we can tell, and this is not specific to any one service, it seems automated decision making systems get rolled out without that confidence.
PM: I've seen instances where someone has had their account hacked, and needed the help of EFF and AccessNow to get it back. I've seen instances of people on both sides of the so-called political aisle being removed. And then I've seen, like I mentioned, hate speech being ignored completely. It's kind of a weird lead-in to this question. But there is a difference between freedom of speech and the allowance of hate speech, right? Where do you think Twitter and other social media platforms should draw the line on allowing people to express themselves? And where is that line where you've now crossed it and you don't have that freedom anymore because of what you're saying?
DG: Well, I think there's two different issues. Right? And this answer will actually change depending on what global legal system you are subject to, but certainly under US law… Every legal system has the distinction between legal speech and unlawful speech. So one distinction a platform can make would be to say all legal speech can go on the platform, but all illegal speech we're going to remove, and they even might have a legal right, even a legal obligation, actually, to remove it, or at least remove it once they know about it. So that's one thing. But there's also a lot of speech that people don't like, that's perfectly legal, where the platforms aren’t going to have any legal obligation to remove it, but they might choose to remove it. And I'm really in favor of platforms actually just developing their own policies around this about what they want and what they don't want.
I think, to me, personally, as a user, I prefer a service that doesn't have hateful speech, because I don't like to interact with that. At the same time, there are some situations where it can be hard to distinguish hateful speech from non-hateful speech — a lot of times counter-speech to someone's hateful speech gets labeled as hateful speech because it refers to the same language, right? And we see a lot of mistaken takedowns of material that is really just refuting somebody else's speech. So even if you choose to have a no-hate-speech policy, it can be difficult to enforce… I do think it's up to platforms to have their own policies around this. What we've found is that for almost every platform that exists, at least their stated policy is to remove hate speech, even though they have no legal obligation under US law to do so. Even the ones that fly the big free speech flag, like Gettr or something like that, don’t allow hate — their terms of service or community standards say no hateful speech.
PM: I guess, in a Gettr instance, the “hate speech” would be a leftist coming on and saying something, right?
DG: I don't have their community standards right in front of me, but I think they specifically exclude Antifa. And they're very concerned about anti-religious speech, which, to me, is not a particularly left or right issue. But that's their specific concern, which at least some people are coming at from a US political right perspective.
PM: Yeah, that's what I figured. So, like you said, it's totally up to the people running the platform what is considered hate speech, outside of the legal stuff that you mentioned. So that leads me to the next kind of series of short questions, which is: Now there's someone at the helm who some people just admire greatly because of his success, but others look at where his success began, with his parents’ money in South Africa. I think more relevant are probably things like firing employees for posting videos of Teslas malfunctioning, and union busting. So there are a lot of people in business, entrepreneurs, who are very excited. What kind of new money-making options might there be on Twitter? Then there are a lot of people who are afraid, because of who it is, that maybe they might fall under hate speech in a way they never did before. I know you don't like to project or guess. Can you tell me anything? Maybe even summarize what you've seen? What are people saying about this that the general public or business owners should know?
"Anybody who says they know what he's going to do and how it's going to change Twitter, if it changes at all, really is just guessing." - David Greene
DG: Anybody who says they know what he's going to do and how it's going to change Twitter, if it changes at all, really is just guessing. So if nothing happened at all, that would be completely not surprising, because there are lots of people who bluster about things before they buy companies. And then once they get in and realize how it works, they realize maybe it was set up the best way it could be. He could also make completely radical changes. And I have no idea what's going to happen. So I think what people should look at is: do the actual stated terms of service change? Is that going to affect the way that businesses can use the site? How sites deal with advertising, for example — he could just completely change that, right? That, at present, is not very highly regulated, at least in the US. And there's an eCommerce application, right? Those are the types of things that could completely be changed. It’s not something he's talked a lot about, but I don't think we can hold him to…
When he says he wants it to be free, it's hard to know what that means — if that just means he thinks that there should be less content moderation, or whether he just thinks it should be different. He's not someone who, in his past, has shown a general respect for freedom of speech, when people want to criticize him, or his companies, certainly, or even really freedom of speech as it's typically framed in the Human Rights framing, where we have a great concern for people who are denied opportunities to speak, who are not in power, who rely on freedom of speech as their only way of being able to participate in society. And to look at freedom of speech as just the freedom of speech of the powerful is a very cramped view of freedom.
There's a potential that nothing could happen but also potential for great changes that would completely change the way people engage with the platform. I think for businesses, again, advertising might completely change. Also, the people who are on it, the audience that's on it might completely change. I keep hearing stories of people saying, “Oh, people are abandoning Twitter…” I don't know if that's true or not, I'll tell you I haven't lost a ton of followers. But I don't have a ton.
PM: I lost about 150 from my second account, not associated with Customer Engagement Insider, and I heard from others they've lost 1000s. The DMs are wild right now, people talking about switching to Mastodon — that's one, and there's another one… CounterSocial is the other one that I'm seeing a lot of people move to. And what these same people are saying is twofold: just the general fear of the people who had been kicked off coming back.
And those people that had been kicked off were the ones involved in disinformation related to the election, for instance, Donald Trump being an example of an account that was eventually removed. And to be honest, I’m not on one side or the other, when it comes to freedom of speech. I'm not certain that these people should have been removed in the first place.
But the reason I bring it up is because a lot of people are certain they should have been removed, and, to your point, tons of people are leaving. But also to your point, tons of people are going to either come now for the first time, or come back from wherever they went in the interim period. So that is going to have a huge impact on advertisers: if their entire potential audience is completely different, they're going to have to rethink (a) what they’re advertising; (b) how they're advertising; and (c) if their audience isn't there anymore, should they be advertising at all?
Another thing I've noticed, in the conversation, since this news broke, is people making a joke that the first change that's going to happen is he's going to remove ad blockers, because he's a businessman. And a lot of people are very happy about their ad blockers. And it's not just on Twitter. Businesses have to be really strategic in order to reach audiences now, because of things like ad blockers, and I'll just mention that the ads on Twitter, at least predominantly, the ones that I see are examples of native advertising. Because they look like a regular tweet. They are in the stream of regular tweets. And then they just have a little note at the bottom that this is a paid ad. And, to me, that's a good type of advertising. Because what’s on the other end, at least ostensibly, is not just a sales mechanism. It's supposed to be a story. That's what content marketing is — it’s supposed to be something that people are excited to read about, irrespective of the brand. So my question for you is just really, what are your thoughts, actually, in general on native advertising on a Twitter platform where people are scrolling and, I think, probably very often missing the fact that it's an ad?
DG: I have to confess that I actually don't have a lot of experience viewing ads, because I don't use it on my phone, and that tends to be where most of the ad service is done. I think that, for consumer protection, something that is an advertisement, or even something that is just paid content, should be apparent to the consumer. So to the extent that native advertising is causing consumer confusion, then I would be concerned about that. But if something is clearly labeled an advertisement, and we know where it's coming from, and we know that the author has a financial interest in it, enough to address the consumer confusion and anything else that might be misleading or deceptive about it, then I'm okay with it.
PM: And I think Twitter does do a decent job. It's how they designed it. And they do a decent job in terms of making it clear.
DG: I always thought the language of “sponsored” was a little bit of an understatement. But I do think that people using the service have some idea of what that means.
PM: And I'll just say, for what it's worth, from my time creating the content marketing division at The Associated Press, these conversations are like weeks long, like, over one little word, like, “Should we use ‘sponsored’? Should we change the color?” Because everyone's very uptight about it, and rightfully so. How do we offer this without turning a lot of people off? And I think they probably did the best they could and still turned a lot of people off. In my experience, I haven't found Twitter to be the most effective for promoting a business or brand or idea through paid sources. I think Twitter can be really good organically. But I've had much better success with LinkedIn, Facebook, and Instagram when it comes to paid.
DG: At least my experience from being on Twitter occasionally is that the advertising is sort of easy to skip over, or you can adjust your settings so you don't see much. But it's a great place for brands to just engage and to have personalities, and to be a little snarky, and things like that, and that seems to build goodwill among people. And I think one of the things that will be interesting to watch would be if Twitter really does take a turn where it becomes a site that just has a lot of stuff people don't want, or where it really feels like the only people still on it are people with sort of beyond-the-pale offensive ideas. How much of an associational cost will there be to a brand that keeps a presence on it, if they're afraid of being associated with the people who didn't leave Twitter? Again, I have no idea whether Twitter will become that, but it's an interesting thing to look for — whether there will be a certain negative toxicity, from a business sense, to continuing to engage on it in that way.
PM: That’s a really great point. And we really don't know yet. I can't see that happening. It's just so big. But it's a really great point. I mean, like, if you're advertising on Gettr or 4chan or something, you know exactly who your audience is. Right now on Twitter, there's tons of audiences, and you have to filter to find the right one. And if that audience goes away, why are you there anymore?...
So, I've kept you for a long time. I imagine you're in high demand right now. So let me just conclude with: in addition to your advice to just keep an eye on the terms and conditions, what's your parting advice for a regular old guy or gal, or neither, who wants to be on social media and Twitter? What would you advise that kind of person to do? And then lastly — obviously, you know where my focal areas are — what would you advise a Twitter employee? What would you advise a business advertising on Twitter? Or using Twitter, like Wendy's, for that snark factor?
DG: It's really just to pay attention to what happens. I really think everyone should at least pay attention if there are going to be changes in editorial policy, to pay attention to that and just make a decision whether the change makes the service either more attractive or less attractive, or you just would use it in a different way. I would also tell consumers: Musk said some things about authenticating humans, which we're not sure what that means, but one of the things that could be a big change would be the loss of the ability to be truly anonymous or pseudonymous on Twitter. So if you're a person who uses it and doesn't have the privilege of using it under your known identity, then you should pay attention…
Twitter historically has been good about fighting to protect user privacy. So when they get legal requests to disclose user information, they've been really good about fighting those. It will be important to see whether that changes. That might change how certain users use the site, or whether they use it at all. One of the things we're looking for is for the new ownership, or new management if there is any, to reaffirm its commitment to the Santa Clara Principles. So everyone can keep a lookout for that.
PM: So it’s really ‘wait and see’ right now. Yeah, I think that makes sense. That’s what my advice would be as well. If you’re part of the group that jumps ship, then everyone’s going to jump ship. If everyone says “chill” then we can wait and see…
DG: Well there might be good reasons to jump ship.
PM: We just don’t know yet. So, wait and see, and keep your eyes open, right?
DG: Yes.
PM: Alright. Thank you so much, man. I really appreciate your time. I really appreciate everything you guys do: this kind of important work, protecting our freedoms and liberties, protecting our privacy, and things like that while we exist in an increasingly online world. As we enter the metaverse, these things are going to become increasingly important for ourselves, and increasingly important for businesses to understand, because they're looking for that customer information — and what we as businesses can access is changing every day. Thanks again. I really appreciate it. And good luck keeping an eye on all this.
Image Credit
Photo used in graphic licensed under the Creative Commons Attribution-Share Alike 4.0 International license: https://commons.wikimedia.org/wiki/File:Elon_Musk_Royal_Society_(crop2).jpg