
Mike Masnick’s Techdirt Profile

mmasnick

About Mike Masnick Techdirt Insider

Mike is the founder and CEO of the Copia Institute and editor of the Techdirt blog.

He can be found on Twitter at http://www.twitter.com/mmasnick




Posted on Techdirt - 25 June 2021 @ 12:03pm

Texas Supreme Court Completely Confuses Section 230, Makes A Total Mess Of FOSTA

from the gotta-love-those-state-courts dept

So, this is... not great. Last year we wrote about a ridiculously bad ruling in Texas regarding a string of what certainly appear to be vexatious lawsuits that try to blame Facebook for sex trafficking. Texas's Supreme Court has now made its ruling on the matter and... it completely upends the limits of FOSTA by literally ignoring what the law explicitly says, and insisting it must mean something different. It is one of the strangest rulings I've ever seen.

The key issue is that Facebook sought a writ of mandamus, basically asking the Court to say "these lawsuits can't go forward because of Section 230." But that apparently requires the Justices on Texas's Supreme Court to read Section 230, as amended under FOSTA, and understand what it actually says. However, Justice Jimmy Blacklock apparently couldn't be bothered to do that. You can kind of get a sense of where this is going from the opening:

We do not understand section 230 to “create a lawless no-man’s-land on the Internet” in which states are powerless to impose liability on websites that knowingly or intentionally participate in the evil of online human trafficking. Fair Hous. Council v. Roommates.Com, LLC, 521 F.3d 1157, 1164 (9th Cir. 2008) (en banc). Holding internet platforms accountable for the words or actions of their users is one thing, and the federal precedent uniformly dictates that section 230 does not allow it. Holding internet platforms accountable for their own misdeeds is quite another thing. This is particularly the case for human trafficking. Congress recently amended section 230 to indicate that civil liability may be imposed on websites that violate state and federal human-trafficking laws. See Allow States and Victims to Fight Online Sex Trafficking Act (“FOSTA”), Pub. L. No. 115-164, 132 Stat. 1253 (2018). Section 230, as amended, does not withdraw from the states the authority to protect their citizens from internet companies whose own actions—as opposed to those of their users—amount to knowing or intentional participation in human trafficking.

I mean, it's true that Section 230 does not "create a lawless no-man's-land on the Internet." What it does is say that the law applies to the party actually breaking the law and not to the tool or service that they used to do so. But the end of the paragraph is already a bit confused about how FOSTA works, so that's a bad omen. Things are going to get silly in this ruling.

From there, Blacklock spews some nonsense about how Section 230 is not clear -- which is simply not true at all. Indeed, right after saying it's not clear, he admits that basically every court to read Section 230 has read it the same way, and promises that his court won't go against those rulings -- even though that's exactly what he's about to do. There's a long digression about Justice Thomas' random musings on Section 230 -- which go against what every other court has decided -- and then Blacklock admits (again) that this court shouldn't go against what all the courts have actually said about Section 230, all of which disagrees with Thomas' random, unbriefed musings.

The ruling does reject some of the outlandish arguments from the plaintiff, trying to say that you can get around Section 230 with negligence and product liability claims. It correctly notes that those are in fact barred by Section 230.

It's then on page 23 that the ruling completely runs off the rails. At issue: does Section 230 pre-empt Texas' state laws regarding sex trafficking in civil cases like these? The obvious and only answer is "yes, it absolutely does." This isn't even remotely up for debate because this very issue was discussed and debated in the run-up to FOSTA. Some of the original FOSTA proposals included opening up Section 230 so that state sex trafficking laws would not be pre-empted by 230 -- but as many people pointed out, that would open up quite a mess, as such laws are drastically different in every state, and would create a massive loophole for mischief in Section 230. So, instead, Congress clearly and explicitly limited FOSTA to say that Section 230 would no longer apply to federal sex trafficking laws in civil cases. It's pretty clear from the text that was added to Section 230 in FOSTA:

(5) No effect on sex trafficking law

Nothing in this section (other than subsection (c)(2)(A)) shall be construed to impair or limit—

(A)any claim in a civil action brought under section 1595 of title 18, if the conduct underlying the claim constitutes a violation of section 1591 of that title;

You see how it specifically names 18 USC 1591? That's because that's the federal sex trafficking law. And, again, this was added explicitly after debate in Congress that included some proposals that would apply FOSTA broadly to all state sex trafficking laws -- and that idea was rejected by Congress -- and the law was written to clearly say it only referred to federal sex trafficking law.

But Justice Blacklock ignores all of that, and rather than looking at what the law actually says, notes that because the Plaintiff reads the law wrong, there's an open question here.

Both parties argue that FOSTA’s changes to section 230 support their positions. As Facebook understands FOSTA, the 2018 amendments carved out particular causes of action from the scope of what section 230 otherwise covers. These carved-out claims include a civil action under 18 U.S.C § 1595 and certain state criminal prosecutions but not civil human-trafficking claims under state statutes. Although a state-law claim under section 98.002 looks much like the federal cause of action created by section 1595, the similarity does not transform Plaintiffs’ statutory claims into suits “brought under” section 1595. In Facebook’s view, Congress’s “meticulous . . . enumeration of exemptions . . . confirms that courts are not authorized to create additional exemptions.” Law v. Siegel, 571 U.S. 415, 424 (2014).

Plaintiffs disagree. They concede that FOSTA does not explicitly except civil human-trafficking claims under state statutes from section 230’s reach. But FOSTA’s silence in that regard does not answer whether such claims fell under section 230 to begin with. According to Plaintiffs, the effect of FOSTA was not, as Facebook assumes, to carve out discrete claims that would otherwise have been barred by section 230. Instead, FOSTA reflects Congress’s judgment that such claims were never barred by section 230 in the first place. Under this reading, FOSTA’s “exception” to section 230 immunity for federal section 1595 claims is not merely an exception. Instead, it is Congress’s announcement of a rule of construction for section 230(c), under which human-trafficking claims like those found in section 1595 were never covered by section 230. In Plaintiffs’ view, by indicating that Backpage was wrong and that section 230 should not be interpreted to bar federal civil statutory human-trafficking claims, Congress must also have been indicating that analogous state civil statutory human-trafficking claims likewise are not barred. After all, there is no conceivable difference between the two categories of claims with respect to whether they “treat” defendants as “speakers or publishers.”

But... that's literally not even remotely true. FOSTA was absolutely to carve out discrete claims that would have otherwise been barred by Section 230. That was what the entire debate was about. The Plaintiff's lawyer is making shit up, whole cloth.

And Justice Blacklock bought it. This is embarrassing.

For two reasons, we find Plaintiffs’ view of FOSTA’s impact more convincing. First, what Facebook calls FOSTA’s “exceptions” to section 230 are not introduced with statutory language denoting carve-outs (such as “notwithstanding” or “except that . . .”). Instead, Congress instructed that “[n]othing in [section 230] . . . shall be construed to impair” certain claims. The U.S. Supreme Court, in interpreting a materially identical proviso, declined to view it “as establishing an exception to a prohibition that would otherwise reach the conduct excepted.” Edward J. DeBartolo Corp. v. Fla. Gulf Coast Bldg. & Constr. Trades Council, 485 U.S. 568, 582 (1988). Rather, the language in question “ha[d] a different ring to it.” Id. A clause stating that the provision to which it applies “‘shall not be construed’ to forbid certain [activity],” was, in the Court’s view, better read as “a clarification of the meaning of [the provision] rather than an exception” to its general coverage. Id. at 586. The Court agreed with the Eleventh Circuit, which had also understood the “shall not be construed” clause as “explain[ing] how the [section] should be interpreted rather than creating an exception” to it. Fla. Gulf Coast Bldg. & Constr. Trades Council v. NLRB, 796 F.2d 1328, 1344 (11th Cir. 1986). Other courts have construed similar statutory language in the same way.

Following this line of reasoning, we do not read FOSTA’s instruction that “[n]othing in [section 230] . . . shall be construed to impair or limit any . . . civil action brought under [18 U.S.C §] 1595” to merely except section 1595 claims from the scope of what section 230 would otherwise cover. Rather, the FOSTA proviso announces a rule of construction applicable to section 230. Congress’s mandate that section 230 not “be construed” to bar federal civil statutory human-trafficking claims necessarily dictates that section 230 must not be construed to bar materially indistinguishable state civil claims either. The elements of the two claims are very similar. If liability under federal section 1595 would not treat defendants as “speakers or publishers” within the meaning of section 230, it is hard to understand how liability under Texas’s section 98.002 could possibly do so.

As lawyer Ari Cohn notes, none of this makes any sense at all. The whole point of FOSTA was not a "rule of construction" to explain general ideas on what Section 230 would and would not apply to, but a very specific carveout for federal sex trafficking laws. It's in the freaking statute. They wouldn't name the law if that wasn't what they were specifically carving out.

Then, it gets worse. Justice Blacklock reads way more into the "sense of Congress" part of FOSTA than is reasonable.

Second, another textual indicator favors Plaintiffs’ understanding of FOSTA’s effects. The “Sense of Congress,” enacted as part of FOSTA’s text, was that “section 230 of the [CDA] was never intended to provide legal protection to . . . websites that facilitate traffickers in advertising the sale of unlawful sex acts with sex trafficking victims.” Pub. L. No. 115-164, § 2. If section 230 was “never intended” to immunize defendants against claims brought pursuant to 18 U.S.C § 1595, it stands to reason that the provision also never afforded immunity from analogous state-law causes of action....

That is completely misreading what Congress was saying here. Congress explicitly carved out federal sex trafficking law from Section 230 because (it claimed) the original Section 230 went too far in shielding websites from that law. And that law explicitly -- Congress named it. And it's not that Congress didn't think about state sex trafficking laws. Again, it did. There was vigorous debate on that very point. And the next two lines in the law both mention state sex trafficking laws, and when they may be used in criminal prosecutions regarding sex trafficking against an internet service provider. In other words, Congress wasn't just giving some rambly "oh, all sex trafficking laws are exempt from 230" kind of message. It explicitly says that federal sex trafficking law is exempt from 230 for civil suits, and that some aspects of state laws can be exempt in criminal cases if they match federal law.

It's impossible to read that and say "oh Congress actually meant that state laws were always carved out from Section 230 civil cases." If that were true, then why would it explicitly carve them out for criminal cases in the very next line? It's mind-bogglingly ridiculous that this is how Justice Blacklock read the law.

As for the whole "sense of Congress" point, that's basically meaningless, and Facebook tried to argue that, but Justice Blacklock says that you can use it if there's ambiguity. But there's no ambiguity in what the law says. Facebook pointed to what the law says. The plaintiff made up some nonsense argument. You don't just throw up your hands and say "well, there are two different ideas here, so it's ambiguous." It's not.

Even more bizarre then, is the conclusion, in which Justice Blacklock says it's up to Congress to modernize the laws -- which he just totally reinterpreted on his own.

The internet today looks nothing like it did in 1996, when Congress enacted section 230. The Constitution, however, entrusts to Congress, not the courts, the responsibility to decide whether and how to modernize outdated statutes. Perhaps advances in technology now allow online platforms to more easily police their users’ posts, such that the costs of subjecting platforms like Facebook to heightened liability for failing to protect users from each other would be outweighed by the benefits of such a reform. On the other hand, perhaps subjecting online platforms to greater liability for their users’ injurious activity would reduce freedom of speech on the internet by encouraging platforms to censor “dangerous” content to avoid lawsuits. Judges are poorly equipped to make such judgments, and even were it otherwise, “[i]t is for Congress, not this Court, to amend the statute if it believes” it to be outdated.

Except, uh, Congress did update the law in 2018 -- and that update is exactly what's being debated here. It updated the law explicitly to say that only federal sex trafficking law is exempted from 230 in civil suits. Justice Blacklock ignores that, throws up his hands, and insists that Congress really meant that state laws were never covered by 230, which is just ahistorical nonsense.

Of course, the case is not done yet, and Facebook might still win (it should) on the merits as the case moves forward, but this just cut a giant hole in 230, and lawyers like the one driving this case are likely going to rush into that hole to file a ton of frivolous and vexatious lawsuits against all sorts of websites, claiming they violated sex trafficking laws. In theory, Facebook could appeal this to the Supreme Court, but that seems incredibly risky these days.

The real irony, of course, is that the only reason FOSTA became law in the first place was because of Facebook's strong support of the law, and now look how that's turned out?


Posted on Techdirt - 25 June 2021 @ 9:30am

Stop Using Content Moderation Demands As An Effort To Hide The Government's Social Policy Failures

from the look!-squirrel! dept

We've been seeing over and over again lately that politicians (and, unfortunately, the media) are frequently blaming social media and content moderation for larger societal problems that the government itself has never been able to solve.

In other words, what's really happening is that the supposedly "bad stuff" that shows up on social media is really indicative of societal failures regarding education, mental health services, criminal law, social safety nets, and much much more. All social media is really doing is putting a spotlight on those failures. And the demands from politicians and the media for content moderation to "solve" these issues is really often about trying to sweep those problems under the rug by hiding them from public view, rather than looking for ways to tackle those much larger, much more difficult societal questions.

Over in Wired, Harvard law lecturer (and former Techdirt podcast guest), Evelyn Douek, has one of the best articles I've seen making this point. First, she describes how -- contrary to the narrative that still holds among some that social media companies resist doing any moderation at all -- these days, they're much more aggressive in seeking to strike down disinformation:

Misinformation about the pandemic was supposed to be the easy case. In response to the global emergency, the platforms were finally moving fast and cracking down on Covid-19 misinformation in a way that they never had before. As a result, there was about a week in March 2020 when social media platforms, battered by almost unrelenting criticism for the last four years, were good again. “Who knew the techlash was susceptible to a virus?” Steven Levy asked.

Such was the enthusiasm for these actions that there were immediately calls for them to do the same thing all the time for all misinformation—not just medical. Initially, platforms insisted that Covid misinformation was different. The likelihood of harm arising from it was higher, they argued. Plus, there were clear authorities they could point to, like the World Health Organization, that could tell them what was right and wrong.

But the line did not hold for long. Platforms have only continued to impose more and more guardrails on what people can say on their services. They stuck labels all over the place during the US 2020 election. They stepped in with unusual swiftness to downrank or block a story from a major media outlet, the New York Post, about Hunter Biden. They deplatformed Holocaust deniers, QAnon believers, and, eventually, the sitting President of the United States himself.

But, as the article notes -- especially on topics where we're learning new stuff every day, and early ideas and thinking may prove incorrect later -- it's proven that relying on content moderation to deal with these issues might not be that great an idea.

The chaos of 2020 shattered any notion that there’s a clear category of harmful “misinformation” that a few powerful people in Silicon Valley must take down, or even that there’s a way to distinguish health from politics. Last week, for instance, Facebook reversed its policy and said it will no longer take down posts claiming Covid-19 is human-made or manufactured. Only a few months ago, The New York Times had cited belief in this “baseless” theory as evidence that social media had contributed to an ongoing “reality crisis.” There was a similar back-and-forth with masks. Early in the pandemic, Facebook banned ads for them on the site. This lasted until June, when the WHO finally changed its guidance to recommend wearing masks, despite many experts advising it much earlier. The good news, I guess, is they weren’t that effective at enforcing the ban in the first place. (At the time, however, this was not seen as good news.)

She separately highlights how these efforts in the US are being used as an excuse for authoritarian governments around the globe to ramp up actual censorship and suppression of activists and dissident voices.

But the key point is that sweeping larger societal issues under the rug by hiding them doesn't solve the underlying issues.

“Just delete things” removes content but not its cause. It’s tempting to think that we can content-moderate society to a happier and healthier information environment or that the worst social disruptions of the past few years could have been prevented if more posts had just been taken down. But fixing social and political problems will be much harder work than tapping out a few lines of better code. Platforms will never be able to fully compensate for other institutional failures.

There's a lot more in Douek's write-up, but I think it's important for anyone who is debating the content moderation space to have read this piece and to at least account for it in any of these debates and discussions. It is not saying not to do any moderation. It is not saying that we should throw up our hands and do nothing. But it is making the very, very important point that content moderation alone does not solve underlying social issues, and yet so much of the focus on questions around social media and content moderation really is a discussion about those failures. And we're not going to make progress on any of these issues if people don't understand what's the symptom and what's the actual disease.


Posted on Free Speech - 24 June 2021 @ 1:25pm

DOJ Seizes Iranian News Org Websites, Raising Many Questions

from the seems-like-a-problem dept

Over the years, we've had many, many concerns about the US government seizing websites as it generally raises 1st Amendment issues (it's not unlike seizing a printing press). Of course, non-US citizens outside the US are not protected by the 1st Amendment, but that doesn't mean we shouldn't be concerned when the US government seizes news websites tied to foreign governments, even those with hostile interests to the US, like Iran. But that's exactly what happened.

When people first started tweeting about this, and showing the graphic that had replaced the websites, many people insisted that it was actually a hack rather than a US government takedown, but the DOJ has now confirmed that they did, in fact, seize these sites.

The DOJ says it actually grabbed 36 such websites in total:

Today, pursuant to court orders, the United States seized 33 websites used by the Iranian Islamic Radio and Television Union (IRTVU) and three websites operated by Kata’ib Hizballah (KH), in violation of U.S. sanctions.

On Oct. 22, 2020, the Office of Foreign Assets Control (OFAC) designated IRTVU as a Specially Designated National (SDN) for being owned or controlled by the Islamic Revolutionary Guard Corps Quds Force (IRGC). SDNs are prohibited from obtaining services, including website and domain services, in the United States without an OFAC license. According to OFAC, the designation of IRTVU as an SDN was in response to the Iranian regime targeting the United States’ electoral process with brazen attempts to sow discord among the voting populace by spreading disinformation online and executing malign influence operations aimed at misleading U.S. voters. OFAC’s announcement explained that components of the government of Iran, to include IRTVU and others like it, disguised as news organizations or media outlets, targeted the United States with disinformation campaigns and malign influence operations. 33 of the websites seized today were operated by IRTVU. The 33 domains are owned by a United States company. IRTVU did not obtain a license from OFAC prior to utilizing the domain names.

Three additional websites seized today were operated by KH. On July 2, 2009, OFAC designated KH an SDN, and the Department of State designated KH a Foreign Terrorist Organization. The announcements described KH as an Iraqi terrorist organization that committed, directed, supported or posed a significant risk of committing acts of violence against Coalition and Iraqi Security Forces. OFAC further explained that the IRGC provides lethal support to KH and other Iraqi Shia militia groups who target and kill Coalition and Iraqi Security Forces. The three domains operated by KH were owned by a United States company. KH did not obtain a license from OFAC prior to utilizing the domain names.

Of course, just last fall we had a similar story of the US government seizing domains that it said were spreading Iranian disinformation. We were concerned then and we remain concerned now.

First off, as Jameel Jaffer notes, while foreign governments have no right to spread disinformation, the 1st Amendment does protect the right of people in the US to receive information from abroad, and it seems like these seizures could violate that.

But, even more to the point, this seems unlikely to end well. Governments seizing websites is the kind of thing that could escalate in ways that backfire. Perhaps US companies will be protected, with most of their sites registered and hosted in the US. But with some of the more exotic top level domains -- which are technically the country codes of foreign countries -- you could certainly see foreign governments make similar efforts to seize the domains of US companies in retaliation.


Posted on Techdirt - 24 June 2021 @ 9:13am

Reason Shows How To Properly Respond To A Questionable Social Media Takedown: By Calling It Out

from the speak-up dept

Content moderation at scale is impossible to do well. I will keep repeating this point forever if I must. Now, I recognize that when you're on the receiving end of a content moderation decision that you disagree with, it's natural to feel (1) angry and (2) that it's a personal affront to you or a personal attack on your view of the world. This is a natural reaction. It's also almost certainly wrong. The trust and safety teams working on content moderation are not targeting you. They have policies they are trying to follow. And they need to make a lot of subjective calls. And sometimes they're wrong. Or sometimes you just have a different view of what happened.

The publication Reason recently had a video pulled down from YouTube, and rather than freaking out and talking about how YouTube is "out to get" them, they instead wrote an article that clearly said that they support YouTube's right to make whatever content moderation decisions it wants, but also calmly explained why they think this decision was probably a mistake. As the article notes:

It remains essential to defend YouTube's right to make poorly reasoned and executed content moderation decisions; any government regulation of speech on social media is likely to backfire and hamper the free exchange of ideas. But it's also essential to recognize and critique censorious overreach if we want the market to respond to such errors. And a healthy market response is exactly what we need when the boundaries of acceptable discourse are being hemmed in by large companies susceptible to political pressure.

And, frankly, it's not that difficult to make some educated guesses on how the video ended up being moderated. It was a video from early in the pandemic about self-described DIY biohackers looking to see if they could create their own vaccines for COVID. Given what was known about COVID-19 at the time, and the speculative/experimental nature of DIY biohacking, some of the thoughts and ideas were probably a bit out there. The video described people who were trying to create their own "knockoff" versions of the mRNA vaccines (which have now proven to be massively successful), in part because of the (certainly at the time) reasonable belief that the FDA would be impossibly slow in approving such vaccines. In retrospect, that didn't really happen (though there are arguments about how the FDA could have moved even faster).

So, you can easily understand how a content moderation review of the content of such a video might flag it as potentially medical misinformation -- or even potentially dangerous. After all, it's talking about injecting a non-FDA approved "vaccine" (and one that, at the time, was highly experimental and hadn't gone through rigorous clinical trials). But, within the context (when it was done, what was being said, how it was framed), there's a strong argument that it should have been left up (and, indeed, has certain historical relevance to talk about the various approaches that people were considering early in the pandemic).

But, this is the very nature of content moderation and why we consider it so impossible to do well at scale. Context is always so important, and that can even include temporal context. Without considering the context of when the video went up, it could appear much more questionable a year and a half later. Or not. It's all pretty subjective.

But, Reason's response is the correct one. It's not blaming YouTube. It's not taking the decision personally, or acting like its viewpoints were systematically targeted. It recognizes that opinions may differ, that YouTube has every right to manage its platform how it wants, but also that Reason can use other means to push a response and counter-argument. If only others who felt similarly wronged were willing to do the same.


Posted on Techdirt - 23 June 2021 @ 10:49am

As Everyone Rushes To Change Section 230, New GAO Report Points Out That FOSTA Hasn't Lived Up To Any Of Its Promises

from the garbage-in,-garbage-out dept

As you may have heard, tons of politicians are rushing to introduce new and different bills to undermine or repeal Section 230 of the Communications Decency Act -- a law that is rightly credited with enabling a more open internet for freedom of speech. As you may recall, in early 2018 we had the first actual reform to Section 230 in decades -- FOSTA. It was signed into law on April 11th, with tons of politicians insisting it was critical to protecting people online. We had so many quotes from politicians (and a whole campaign from Hollywood stars like Amy Schumer) claiming (falsely) that without FOSTA, children could be "bought and sold" online.

One thing the bill did include (in Section 8) was a requirement that 3 years after the bill passed, the GAO should put out a report on how effective it has been. It's a few months late (the GAO does excellent work, but tends to be overworked and under-resourced) but on Monday the GAO finally released its study on the effectiveness of FOSTA. And... it basically says that all of the critics' claims were exactly right.

Before FOSTA became law, Section 230 co-author Senator Ron Wyden warned:

I fear that the legislation before the Senate will be another failure. I fear it will do more to take down ads than take down traffickers. I fear it will send the bad guys beyond the grasp of law enforcement to the shadowy corners of the dark web, where everyday search engines don’t go, but where criminals find safe haven for their monstrous acts....

[....]

In my view, the legislation before the Senate will prove to be ineffective, it will have harmful unintended consequences, and it could be ruled unconstitutional.

[....]

But the bill before us today will not stop sex trafficking. It will not prevent young people from becoming victims.....

So, now that we're three years in, what has the GAO found? That, just as predicted, the law was not at all necessary, and has barely been used, and the very, very few times it's been used in court, it's been ineffective:

Criminal restitution has not been sought and civil damages have not been awarded under section 3 of FOSTA. In June 2020, DOJ brought one case under the criminal provision established by section 3 of FOSTA for aggravated violations involving the promotion of prostitution of five or more people or acting in reckless disregard of sex trafficking. As of March 2021, restitution had not been sought or awarded. According to DOJ officials, prosecutors have not brought more cases with charges under section 3 of FOSTA because the law is relatively new and prosecutors have had success using other criminal statutes. Finally, in November 2020 one individual sought civil damages under a number of constitutional and statutory provisions, including section 3 of FOSTA. However, in March 2021, the court dismissed the case without awarding damages after it had granted defendants' motions to dismiss.

So... why did we need it again? Why was it so urgent? Why were Senators, Members of Congress, Hollywood stars, and others practically shoving each other aside to say that we needed this yesterday? And now, it's barely been used?

The report also shows -- again, as many of us predicted -- that post-FOSTA, law enforcement has had more difficulty tracking down those engaged in sex trafficking. Not because there is less trafficking, but because it has become harder for law enforcement to find the traffickers or access the details:

The current landscape of the online commercial sex market heightens already-existing challenges law enforcement face in gathering tips and evidence. Specifically, gathering tips and evidence to investigate and prosecute those who control or use online platforms has become more difficult due to the relocation of platforms overseas; platforms’ use of complex payment systems; and the increased use of social media, dating, hookup, and messaging/communication platforms.

The relocation of platforms overseas makes it more difficult for law enforcement to gather tips and evidence. According to DOJ officials, successfully prosecuting those who control online platforms—whether their platforms are located domestically or abroad—requires gathering enough evidence to prove that they intended that their platforms be used to promote prostitution, and, in some cases, that they also acted in reckless disregard of the fact that their actions contributed to sex trafficking.

Of course, in the runup to passing FOSTA, everyone kept talking about Backpage -- that FOSTA was needed to take down Backpage, the company that everyone insisted was terrible. Except... as everyone now knows, the DOJ took down Backpage without FOSTA. Perhaps the real issue had nothing to do with the law itself, and plenty to do with the DOJ not doing much on this issue. Or, worse, with the DOJ recognizing that maybe Backpage wasn't the evil bogeyman that politicians and the press were making it out to be.

After all, leaked government documents later showed that Backpage was actually working closely with law enforcement to track down traffickers. So, guess what the report has found now? Thanks to FOSTA, and to the takedown of Backpage, the FBI is now finding it really, really difficult to actually track down traffickers:

According to a 2019 FBI document, the FBI’s ability to identify and locate sex trafficking victims and perpetrators was significantly decreased following the takedown of backpage.com. According to FBI officials, this is largely because law enforcement was familiar with backpage.com, and backpage.com was generally responsive to legal requests for information. In contrast, officials said, law enforcement may be less familiar with platforms located overseas. Further, obtaining evidence from entities overseas may be more cumbersome and time-intensive, as those who control such platforms may not voluntarily respond to legal process, and mutual legal assistance requests may take months, if not years, according to DOJ officials.

So... end result: the government is barely using FOSTA and it's now significantly more difficult to find sex traffickers.

And, of course, the report doesn't even touch on the things FOSTA did to harm sex workers, putting them more at risk. It also doesn't talk about how lots of legitimate sites, such as dating sites and Tumblr, started aggressively blocking content that was likely perfectly legal, out of fear that FOSTA would open them up to criminal liability.

So, here's the big question: as a ton of politicians are pushing for big, massive changes to Section 230, will they listen to us this time when we warn them about the possible consequences of such changes? Or will they dismiss us and insist that we're lying, like they did last time? And who will go ask the politicians and Hollywood stars who swore that FOSTA was so absolutely necessary how they respond to the fact that it didn't work, isn't being used, has made it more difficult to stop actual trafficking, and has put actual lives in danger? Because all of that seems kind of important.


Posted on Techdirt - 23 June 2021 @ 5:36am

You Don't Own What You've Bought: Peloton Treadmill Edition

from the everything-costs-money dept

We've written so many stories about how you don't own what you've bought any more due to software controls, DRM, and ridiculous contracts, and it keeps getting worse. The latest such example involves Peloton, which is best known for its extremely expensive stationary bikes with video screens, so that you can take classes (usually on a monthly subscription). I will admit that I don't quite understand the attraction, but plenty of people swear by them. The company has also branched out into extremely expensive treadmills with the same basic concept, but that product has been in the news for all the wrong reasons lately, after a six-year-old child died in an accident with the device. (For what it's worth, that article links to a page on the Peloton site where Peloton reportedly posted an open letter to its customers about the accident, but the letter is no longer at that link.)

The death kicked off an investigation by the US Consumer Product Safety Commission, which then told Peloton it should recall the treadmills and that people should not use them if there are children or pets nearby and apparently you should lock yourself in a room with them:

If people want to keep using the Tread+, they should only do so in a locked room and they should keep other objects away, the agency said. It advised people to unplug the treadmill while not using it and to keep the key to turn it on elsewhere and away from the reach of kids.

In a move that seemed guaranteed to generate bad PR, Peloton fought back against the recommendation, calling it "inaccurate and misleading." It wasn't a very good look, and a few weeks later the company did, in fact, issue a recall -- though reports note that very few people will take the company up on it.

Not surprisingly, the company is also now facing some class action lawsuits. That article notes that, even for people who do not return their recalled Peloton treadmills, the company will issue a software update intended to improve safety and prevent children (and pets) from using the device:

Peloton announced that they will refund the machine, which costs $4,295, and are working on a mandatory software update that will automatically lock the Tread+ after each use and require a unique password to be used to unlock the machine.

That automatic lock and password idea sounds sensible enough, given the situation, but apparently Peloton hasn't figured out how to make it work for customers who bought the treadmill and aren't using its subscription service for classes. The Tread+ does have a "Just Run" mode, in which it acts like a regular treadmill (with the video screen off). But, as Brianna Wu discovered, the company is now saying that the "Just Run" mode requires a subscription to work with the lock. The company is waiving the cost of such a subscription for three months, and it's unclear from the email whether that means that after the three months it hopes to have "Tread Lock" working even for non-subscription users:

If you can't see it, the image is an email from Peloton customer support saying:

We care deeply about the safety and well-being of our Members and we created Tread Lock to secure your Tread+ against unauthorized access.

Unfortunately at this time, 'Just Run' is no longer accessible without a Peloton Membership.

For this inconvenience, we have waived three months of All-Access Membership for all Tread+ owners. If you don't see the waivers on your subscription or if you need help reactivating your subscription, please contact our Support team....

Now, it's possible that the subscription part is necessary to update the software to enable the lock mode, but that seems... weird. After all, there must have been some sort of software upgrade that locked out the "Just Run" mode in the first place.

And, obviously, you can understand why (given what happened) Peloton wants to make sure that everyone has upgraded with these new safety features. But the email is woefully unclear on whether, after the three months of free membership, you'll have to start paying the $40/month to keep using the treadmill, or whether it just becomes a quite expensive piece of weird furniture.

I get the need to deal with the risk of harm... but you'd think that the company would have done a better job of making sure it did so in a manner that didn't mean forcing people into a subscription they might not want. Indeed, as basically anyone could have predicted, once this started getting attention, Peloton promised that it was working hard to figure out a way to re-enable "Just Run" without a subscription. Of course, if that was always the plan, you'd think that the email would have said so, because this concern was wholly predictable.

Either way, it's yet another reminder of how we don't truly own what we've bought any more thanks to such software locks and the ability to update things after they've been purchased, including taking away features. And that should concern everyone.


Posted on Techdirt - 22 June 2021 @ 1:36pm

Bad Patents Getting In The Way Of A Fun Toy; Or Why I Had To Teach My Kids About How Patents Ruin Everything

from the well-that-sucks dept

Last year I backed a very cool looking crowdfunding project for my kids. It's called Makeway, and it seems like the coolest possible marble run setup. Marble runs are already cool, but since basically everyone in my family will spend hours just staring at some of the more advanced marble run setups in museums (or building them in the more hands-on museums, or much simpler ones with home kits), this seemed like a really amazing project: a way to create a museum-level marble run in your own home. The project launched right before the pandemic went into full swing, and, like tons of crowdfunding projects, it's had some difficulties along the way. Of course, unlike many such projects in which the creators go quiet and hide behind silence as they deal with the difficulties, the guy behind Makeway sends out incredibly intricate, novella-length updates, going deep into the challenges and (usually!) the solutions.

Indeed, that part has been kind of fascinating -- especially to my kids, who actually get super excited each time a new update is sent and want to hear all the details of the project (indeed, learning about how difficult it is to create a product like this, and the effort the creators are making to get past those hurdles, seems like a good lesson for kids to learn). While they've been disappointed that the shipping of the product has been delayed, the updates are still neat, and I have every confidence that the product will eventually be delivered.

Except... not all of it. The latest update gave me a new lesson to teach my kids: just how stupid patents can be, and how they can mess up cool products. Buried in the middle of this latest epic update was one hurdle that simply could not be overcome: threats from patent holders. For a freaking marble run piece.

It's not a critical piece by any means -- it's more of a fun piece. Indeed, they called it the "party" piece. Basically, as a marble zips by, a fan spins, and it can light up with a message and play music. Neat:

The Makeway guys really liked this part too:

The party part is our pride. We invested to make it sleek, elegant, always working with no buttons or switches, and mostly - working fluently, with a slight touch of the running marble. We hired a composer to make the music, we added our vocals to the track, we programmed the fan light to show Makeway's logo, we placed an order and fully paid for 2 IC's (the brain of the part) - one for the lights and one for the music, and to our delighted surprise, all this effort came up to a really satisfying part, that we included in many of our videos. We were very happy and even surprised when all were composed together into one working piece, and we were excited to start producing it to be able to ship it soon to our backers...

But, it's not to be. Apparently, they were threatened by the holder of a newly granted patent, who claims that this little toy violates it. (They don't reveal who it was, other than that the patent is owned by a competing marble run company -- though they do reveal that the lawyers refer to it as the '205 patent, meaning those are the last three digits of the patent number. A bit of poking around by me failed to find the relevant patent. Update: clever commenters have found the patent.)

Unfortunately, some time ago we got a letter stating that we are infringing, with this piece, a patent that was approved a few months ago and that is owned by a marble-run company. The patent describes a part that, when triggered by a marble, turns on light, and/or sound, and/or sends RF signals.

The Makeway guy says he explored a variety of options, but with the other company demanding a huge licensing fee -- on top of all the other challenges that this project has come across, it just couldn't work out.

A company at this gentle stage of stabilizing it's financial standards, in our size and with our resources, could waste all of it's assets trying to defend it's activity in those types of lawsuits. As hard as it is for us, we just can't take the risk to get lost in lawsuits instead of fulfilling Makeway and taking care of it's future. we had to sacrifice our best part in order not to get in financial troubles that might (and most likely) risk the future of Makeway.

And that's why, this week, I (for the first time) had to explain to my kids just what patents are, and just how damaging they can be. It's something I've obviously written about for years, but didn't expect that it would impact my kids at this age. And, yes, in the grand scheme of things, Makeway not being able to deliver this one fun, but not essential, part is not the end of the world. But it really does show how ridiculous patents are. Why does such a thing need a patent? It doesn't. It's clearly an idea that multiple people were coming up with. Just let everyone develop their own versions and compete in the marketplace.


Posted on Techdirt - 22 June 2021 @ 9:11am

Disproving The Nonsense About The FBI & Jan. 6th Would Be Easier If The FBI Didn't Have A History Of Entrapping People In Made Up Plots

from the you-guys-made-this-worse dept

There's a very, very dumb conspiracy theory making the rounds -- and I want to be very clear that it has zero evidence to support it -- claiming that the FBI was actually behind the January 6th invasion of the Capitol. It was originally reported by a wacky extremist news organization that I won't even bother naming here, and then got a lot more attention when Fox News made it a story via Tucker Carlson's show. The underlying confusion is that a reporter (a former Trump admin official who was let go after attending a conference with white nationalists, but was later appointed to a new job within the Trump White House) completely misunderstood what "unindicted co-conspirator" means in various charging documents.

What it generally means is people the government has not yet charged, and whom it doesn't want to name so as not to tip them off (or whose identities it doesn't yet know, or against whom it doesn't yet have enough evidence, or any of a variety of other reasons). What it absolutely never means is an undercover FBI agent or informant. Those people are never described as unindicted co-conspirators. But the reporter somehow got it into his head that this meant they were FBI agents, and then went to town with a conspiracy theory blaming the FBI for the insurrection, claiming that it was designed to "frame the entire MAGA movement."

As noted, this is false, and there is no evidence to support it. At all. It's a figment of the imagination of someone who has no idea what he's talking about, and of course Tucker Carlson ran with it, because that's what Tucker Carlson does.

But... here's the thing: it would be a hell of a lot easier to debunk this nonsense if the FBI (especially since 9/11) didn't have a depressingly long history of... setting up fake terrorist plots in order to entrap people to get big headlines around an arrest of someone who never had any means to actually carry out the attack. We've covered examples of these kinds of FBI activities for years. We've written about examples of this over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over again.

No doubt, what the FBI does in those cases is disgusting and highly questionable. It often involves them searching out people who are either mentally troubled or really desperate, and then proposing they get involved in a completely fictional terrorist plot -- a plot that the individuals would have no possible chance of actually carrying out on their own. The undercover FBI agents (or the confidential informant working for the FBI) then proceed to do all the actual "planning" including buying any of the necessary materials and getting all the details in order. Then, after the planning has reached a certain point and the sucker is bought in on the plan, they're arrested, and the FBI claims it "stopped" a terrorist attack -- which usually gives the FBI lots of glowing press attention.

Of course, the reality is that there was no threat. There was no actual plot. There was never any ability to actually carry anything out. The weapons or bombs or whatever were all fake, or never existed at all. It's all a shadow play so the FBI can try to get some headlines and pretend it's doing something.

But that's clearly not what happened with January 6th. For one thing, the events of January 6th actually happened. The Capitol was actually invaded. Damage was actually done. If the FBI had been planning it as per its usual homegrown plots, no actual attack would have happened. Also, if you look at the pattern of who the FBI has gone after with these plots... it hasn't really been the Trump-supporting MAGA militia type.

Either way, though, people wouldn't have to be doing this big silly debunking of this kind of nonsense conspiracy theory if the FBI didn't actually have a track record of doing this kind of thing over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over again.

So, you know, perhaps they should stop doing that.


Posted on Techdirt - 21 June 2021 @ 1:32pm

As Predicted, Smaller Media Outlets Are Getting Screwed By Australia's Link Tax

from the exactly-as-we-warned dept

Ever since the giant news organizations, led by Rupert Murdoch's News Corp., began pushing the ridiculous idea of forcing Google and Facebook (and often just Google and Facebook) to pay a "link tax," we've been pointing out that while this might be a windfall of free money for the news giants, small news organizations (like, um, us) would likely get totally screwed over. With Australia leading the charge of silliness and passing its link tax, we're discovering that our predictions were exactly correct.

The big Australian publishers, News Corp. and NINE, are making out like bandits, while the smaller publications? Not so much.

A long-term commercial deal between Facebook and Google and Guardian Australia is expected to be completed in a matter of days, adding to a raft of agreements struck between large tech companies and major media outlets since February. While companies like Nine Entertainment Co, News Corp Australia and Seven West Media are already implementing plans off the back of the deals, there is increasing concern among smaller companies that they still have not been remunerated fairly.

Of course, this isn't really surprising. In fact, the real worry should be that the administrative cost for the internet companies of figuring out how to compensate smaller publishers is so far out of proportion to the benefit that those smaller publications will simply start to be excluded en masse from Google and Facebook, once again serving the interests of the largest publishers and not actually helping the cause of journalism at all.


Posted on Techdirt - 21 June 2021 @ 9:27am

No, Facebook's Argument In Response To Muslim Advocates' Lawsuit Is Not 'Awkward'; Facebook Caving On 230 Is What's Awkward

from the so-much-silliness dept

Mother Jones has a slightly weird article saying that Facebook is making an "awkward legal argument" in a lawsuit filed against the company by Muslim Advocates, which argues that Facebook and its executives lied to Congress when they insisted that the company would remove hate speech. There's a lot to unpack, though I'd note that there are two things I find awkward here -- and neither of them is Facebook's legal argument in the case. The first bit of real awkwardness is Muslim Advocates trying to argue that Facebook's failure to remove certain content violates consumer protection laws. The second awkward bit is Facebook's constant political posturing about its openness to Section 230 reform.

Let's dig into the case, though. The complaint from Muslim Advocates (and filed by a lawyer who is a long-term critic of Section 230) is fairly straightforward. It says that Facebook's execs have testified before Congress that the company removes content that violates its policies. Yet, when Muslim Advocates alerted the company to content that it believed violated Facebook's policies, the company did not always remove it. Ergo (the complaint says), it means that Facebook's execs lied to Congress... and somehow that violates DC's consumer protection laws.

There's plenty here to roll your eyes about. There is no doubt that (tragically) there is plenty of hate speech on Facebook directed at Muslims (and many other groups). It is also true that content moderation is impossible to do well at scale, and that (1) mistakes will be made and (2) lots of people will disagree with Facebook's interpretation of its own rules. But the fact that Facebook testifies it will take down policy-violating content it becomes aware of, while someone else believes certain content violates those policies and Facebook doesn't take it down, does not mean that Facebook lied to Congress. It just means that there are differing interpretations of Facebook's policies, and Facebook is the one who gets the final say on that.

The lawsuit, obviously, argues otherwise. I find that argument to be kinda silly. And, if it actually wins the day in court, it would be tremendously problematic for the open internet. Enabling basically anyone to sue a company for not taking down content that the person (but not the company) believes violates policies is a recipe for (1) a ton of frivolous, wasteful litigation and (2) the creation of a near-automatic heckler's veto for almost any content online. That would be very, very bad.

Also, the specific claims are kinda weird. How is it a "consumer protection" violation? Well, according to the lawsuit:

The CPPA establishes a right to truthful information from merchants about the consumer goods and services that they provide to people in the District of Columbia.

And thus, because this group claims Facebook lied to Congress, that somehow means that it did not provide "truthful information... about the consumer goods and services they provide." That... is a huge stretch. There are also claims of fraudulent and negligent misrepresentations.

Facebook has responded with two separate motions to dismiss. One is a typical 12(b)(6) motion to dismiss for failing to state a legitimate claim. The second is a separate motion to dismiss under DC's anti-SLAPP law. There are lots of interesting arguments made in both of those filings (some of which overlap), but the crux of the defense is exactly as you'd expect: (1) no one at Facebook said that they'd be perfect in moderating and (2) if Facebook disagrees with some third party about whether or not some content violates Facebook's policies, that's not evidence of any lie.

Billions of people use social media to express themselves, which means that content reflecting the full range of human experience finds expression on platforms like Facebook. Facebook agrees with Plaintiff Muslim Advocates that anti-Muslim hate speech is vile, and devotes significant resources to keeping such abuse off its platform based on Community Standards that outline what is and is not allowed on Facebook. Enforcement of the Community Standards requires being aware of potentially violating content, either through Facebook's own efforts or reports by third parties, and making judgments as to whether that content should be removed as violating the Community Standards. As Facebook has candidly acknowledged, these judgments are subject to disagreement and error, but Facebook remains committed to making its service a place where people feel safe to share with others and express themselves.

Managing a global community in this way has never been done before. Facebook is committed to continuing to improve its enforcement efforts and believes that means engaging Congress and other stakeholders to share and seek input on its policies and practices. As part of this ongoing dialogue, Facebook executives have testified before Congress regarding the Community Standards,

The part of the defense that caught Mother Jones' interest is that Facebook (correctly) notes that this lawsuit is clearly barred under Section 230. And that does seem pretty clear. It's not awkward at all.

...all of Plaintiff's claims are barred by Section 230 of the Communications Decency Act, 47 U.S.C. § 230 (“CDA”), because they seek to impose liability on Facebook for not removing third-party content that Plaintiff believes should be removed. Plaintiff attempts to plead around the CDA by bringing misrepresentation claims, but it is clear from the Complaint that Plaintiff is challenging Facebook's alleged failure to remove certain third-party content that Plaintiff believes violates the Community Standards. These are editorial decisions that go to the core of conduct protected by the CDA.

Mother Jones claims that this is an awkward legal argument because of Facebook's openness to reforming Section 230. But, even for those of us who don't trust Facebook's proposal for reform, there is no indication at all that, if Facebook got what it wanted out of 230 reform, this case wouldn't still be barred by it. Facebook's reform proposal is basically that if it engages in best practices regarding content moderation, it still gets 230 protections. And even if Muslim Advocates disagrees, Facebook can make a pretty strong case that it engages in "best practices" regarding content moderation. Indeed, Facebook's proposal also made clear that no reform should punish a company for missing any particular piece of content.

So there's nothing in Facebook's legal arguments that goes against the company's own advocacy regarding 230 reform, and it's difficult to see why that legal argument is "awkward." It's not. It's just Mother Jones trying to spin this into a story -- which is pretty disappointing. Especially considering that Mother Jones has been so active in the good fight for stronger and better anti-SLAPP laws, which (as the other filing in this case shows) would protect Facebook here, since this lawsuit seems (also) to be an attempt to punish Facebook and its execs for their speech at Congressional hearings (a classic kind of SLAPP situation).

If anything, the "awkward" part is why Facebook continues to be so willing to throw Section 230 under the bus, when cases like this (and so many others) show why the law totally makes sense and does what it needs to do: making sure that websites can moderate without fear of facing liability for their many, many difficult subjective choices. Of course, we all know the real reason Facebook is doing this: the politics of the day mean that it has to "give" something, since so many people are mad at the company, and Facebook has (unfortunately, probably correctly) realized that if it undermines 230, it can do so in a manner that Facebook can survive and its smaller competitors cannot.

The rest of the motions to dismiss are worth reading as well, as they deftly call out the silliness of the complaint, including the fact that when Facebook execs say that they remove content that violates policies, that happens only after (1) they're aware of it and (2) they, themselves, determine that the content actually violates the policies, something that is inherently subjective:

Contrary to Plaintiff's assertion that Facebook executives represented in Congressional testimony that Facebook removes all content that violates the Community Standards, that testimony makes clear that enforcement of the Community Standards depends on Facebook being aware of potentially violating content and making judgments that are subject to disagreement and error.

As for the argument that this is a consumer protection issue, Facebook notes that that law is about the sale of products, which just doesn't apply here at all:

Plaintiff cannot state any claim under the CPPA because it regulates conduct arising out of consumer-merchant relationships, and Plaintiff does not, and cannot, allege any such relationship with Facebook, or that the alleged misrepresentations were made in connection with the sale of goods or services to Plaintiff or anyone else.

As noted, there's a lot more detail in the filings that is worth reading, but this should give you the gist of both sides of the argument. This lawsuit seems an unfortunately silly one by Muslim Advocates, and frankly undermines the work that the organization does. And, if Facebook wins the anti-SLAPP argument (which is certainly possible), then the organization might even end up on the hook for Facebook's legal bills (which are, I'm sure, quite expensive, given the multiple well-known lawyers it has working on this case).

There is one separate thing that is probably worth noting in this case: it does have some similarities to the somewhat infamous Barnes v. Yahoo case in the 9th Circuit, in which the court ruled that via "promissory estoppel," a plaintiff could get around Section 230. In that case, the plaintiff spoke to someone at Yahoo who promised them they would remove some content, but then did not. In that case, the court said that once an employee promised the plaintiff that the content would be removed, the company loses the 230 protections.

However, this case strikes me as notably different in multiple ways (and, of course, the court is not bound by an already problematic 9th Circuit ruling, since the case is in DC Superior Court). In Barnes, the plaintiff not only alerted Yahoo to specific pieces of content, but an employee then told the plaintiff that the company would "take care of" that content. So that was the promise. Here, the plaintiffs are taking broad statements to Congress regarding Facebook's content moderation practices and trying to say that they constituted some sort of binding promise never to be wrong, or never to disagree with Muslim Advocates' own subjective opinion. And that's just silly.

So, in the end, we have an awkward basic legal argument from Muslim Advocates, and an awkward bit of political posturing by Facebook with its publicity campaign to "reform" Section 230. What is not awkward at all is Facebook's legal response to this silly lawsuit.


Posted on Techdirt - 18 June 2021 @ 10:51am

Devin Nunes' Family's Bizarrely Stupid Defamation Lawsuit Goes Off The Rails

from the wow dept

As you may recall, Rep. Devin Nunes has been involved in a bunch of totally frivolous SLAPP suits that seem designed to intimidate journalists out of writing stories critical of him. A key one that seems to have gotten deeply under Nunes' skin is an Esquire piece from a few years ago, entitled Devin Nunes’s Family Farm Is Hiding a Politically Explosive Secret, written by reporter Ryan Lizza. In the fall of 2019 Nunes sued over that article, and a few months later his family sued over it as well.

To say it hasn't gone well for Nunes would be an understatement.

As a reminder, the article claims that the "politically explosive secret" is simply the fact that, despite Nunes repeatedly pitching himself as a California farmer, his family packed up the farm and moved it to Iowa a while back. Much of the article is about how they appear to have worked overtime to try to hide that:

So here’s the secret: The Nunes family dairy of political lore—the one where his brother and parents work—isn’t in California. It’s in Iowa. Devin; his brother, Anthony III; and his parents, Anthony Jr. and Toni Dian, sold their California farmland in 2006. Anthony Jr. and Toni Dian, who has also been the treasurer of every one of Devin’s campaigns since 2001, used their cash from the sale to buy a dairy eighteen hundred miles away in Sibley, a small town in northwest Iowa where they—as well as Anthony III, Devin’s only sibling, and his wife, Lori—have lived since 2007. Devin’s uncle Gerald still owns a dairy back in Tulare, which is presumably where The Wall Street Journal’s reporter talked to Devin, and Devin is an investor in a Napa Valley winery, Alpha Omega, but his immediate family’s farm—as well as his family—is long gone.

The article also discusses a bunch of other oddities about the Nunes' farm in Iowa, and while it never comes out and directly claims that the farm hires undocumented workers, it does note that most other farms in the area do. This point has become somewhat important in the case.

Devin Nunes' own part in the case is effectively over, as the judge dismissed it last summer, pointing out that absolutely nothing Nunes claimed was defamatory actually was defamatory (Nunes is appealing, because of course he is, but it's hard to see much of a chance of the case being reinstated). And while the judge had made it clear that the lawsuit by Nunes' family was on shaky ground, the Nunes family and their lawyer, the infamous Steven Biss, decided to keep the case going.

The only claim that has survived is the one in which Nunes' family says the article is defamatory due to the implication that the farm has employed undocumented workers. So, as would be expected, one of the things that Esquire's publisher, Hearst, wished to do was to depose the workers on the farm to establish their documentation. Last month, it became clear that something nutty was going on after Hearst filed quite a document with the court about its efforts to depose the workers from NuStar farms. Much of the filing is redacted, but you can still get a sense of the frustration:

This Motion comes in the wake of an unusual and troubling series of events in this case, which were previewed for the Court during last week’s telephone conferences with Judge Roberts....

Reading through the details (and especially the declaration of one of Esquire's lawyers) strongly suggests (though the redactions make it a little tricky to parse out) that Biss has played games to try to keep NuStar's employees from giving depositions. This includes questions about whether or not Biss would accept service on behalf of those employees and also whether or not he would represent those employees.

Reading those links suggests the case was already turning into something of a clusterfuck, and apparently on Thursday it all blew up as the magistrate judge on the case benchslapped Biss and told him to stop playing games (first reported by the Fresno Bee, whose parent company was also sued by Nunes, and which has done some great reporting on these cases).

The order from the magistrate judge details what happened when Hearst's lawyers were finally able to depose the NuStar employees and... um... wow.

Defendants noticed the depositions of six of Plaintiffs’ current employees and had them served with subpoenas duces tecum that required them to bring identification to their depositions. Plaintiffs’ counsel, Steven S. Biss, accepted service of the subpoenas on behalf of the employees, but Plaintiffs arranged for separate counsel, Justin Allen, to represent the deponents. F.S.D. was the first such witness to be deposed on May 12, 2021.

While Defendants’ counsel was questioning F.S.D. about his purported signature on various documents, Mr. Allen stated, “I’ve advised my client to invoke his Fifth Amendment right regarding questions about this document. [F.S.D.] --- ” (Doc. 103-8 at 20 (Dep. pp. 71-72).) Mr. Biss then interrupted stating, “Hold on. Hold on. Can we go off the record for just a minute? I’d like to talk to Justin before we do this.” (Id. (Dep. p. 72).) In fact, the deposition was delayed for much more than just a minute. More than two hours later, the deposition resumed. When Defendants’ counsel attempted to make record, Mr. Biss interrupted him several times insisting that Mr. Allen would make a statement and the deposition would be rescheduled. Once Mr. Biss got his way, Mr. Allen stated,

I am not going to allow [F.S.D.] to answer that question because when we left it I advised him to invoke his Fifth Amendment right. We took a break. We went off the record, and we’ve had several conversations with lots of people and I’ve talked to [F.S.D.], and as of now I am no longer representing him. I am not his lawyer.

(Id. at 21 (Dep. pp. 74-75).) The depositions were then halted. At the hearing, Mr. Biss stated that a new lawyer had been retained to represent the employee witnesses at their depositions, but he could only identify the new attorney by her first name, Jennifer. Mr. Biss was ordered to provide her name to opposing counsel and the Court. To date, I have not received that information.

There are so many "wow" elements in there, and also plenty of things on the "these are things you should not do during a deposition" list. Just the fact that a judge would recount that in an order is kind of incredible, and suggests that the court is already both aware of and sick of Biss's antics.

The judge orders the employees from NuStar to actually comply with the subpoenas, and seems to suggest that Biss failed to inform the employees of their obligations under the subpoena until the morning of the deposition (another wow moment):

Although the subpoenas had been timely served and no objection was raised, apparently F.S.D. first learned of the deposition on the morning it was scheduled, he had not been shown the subpoena, and he did not appear with the requested documents.

So the court doesn't just order that the employees comply with the request to produce documents, but tells Biss to make sure that the employees are properly informed of them:

To avoid a repetition of this problem at upcoming depositions, Mr. Biss and any attorney retained for the employees will inform the employees of their obligation to search for the requested documents and bring the documents to the deposition, if they still possess them. Mr. Biss and any attorney retained for the employees will also advise the employees that the Court has ordered this production and employees may be asked about their efforts to comply at the deposition.

And then the magistrate judge addresses Biss's behavior. And you can tell he's not happy.

Defendants complain about Mr. Biss’s behavior during the deposition of F.S.D. Particularly, Defendants assert that Mr. Biss asserted argumentative objections that were disruptive and intended to intimidate or coach the witness. Mr. Biss asserts that his objections were proper and “intended to call out the Defendants’ overt harassment of the NuStar employee.” (Doc. 107 at 10.) Mr. Biss’s further explanation on this issue is puzzling and troubling:

No effort was made to “signal to the witness how to answer questions” or to “coach[] the witness to testify in a certain way.” Counsel for the Defendants got answers to all his questions, including those about [F.S.D.’s] traffic tickets. The deponent was never instructed not to answer. Indeed, he wanted to answer all questions. Plaintiff’s counsel sought a side bar with counsel for the witness to determine whether the witness wanted to take the Fifth Amendment. The witness did not, which is why the witness terminated the lawyer with absolutely no prompting by Plaintiffs’ counsel.

(Id. (brackets in original).) During the deposition, Defendants’ counsel was asking questions about documents such as a bond F.S.D. had posted and a traffic ticket he had received that bore his signature. Mr. Biss made a lengthy speaking objection claiming this was harassment. (Doc. 103-8 at 19 (Dep. pp. 66-67).) Here, where the identity and immigration status of the employees is a central issue, it is not harassing or irrelevant to ask questions about such documents. In the context of this case, it is not conducive to obtaining truthful answers from an employee such as F.S.D. to have his employer’s lawyer making lengthy, animated objections to those questions.

The most puzzling and troubling aspect of Mr. Biss’s explanation, however, is the representation that he “sought a sidebar with counsel for the witness to determine whether the witness wanted to take the Fifth Amendment.” (Doc. 107 at 10.) This two-hour “sidebar” occurred immediately after Mr. Allen stated, “I’ve advised my client to invoke his Fifth Amendment right regarding questions about this document.” (Doc. 103-8 at 20 (Dep. pp. 71-72).) Normally, one would expect the lawyer for a deponent to be in the best position to ascertain whether the deponent desires to assert a privilege. There is no record of the sidebar, only Mr. Biss’s protestations that the employees are not being pressured regarding their rights under the Fifth Amendment. Mr. Biss makes bald assurances that the employees want to answer all questions and not assert their Fifth Amendment rights. Nevertheless, Mr. Biss’s behavior—coupled with the facts that (a) the privilege was raised, (b) the privilege was perhaps withdrawn after a lengthy sidebar, and (c) Mr. Allen was fired—gives me little confidence that F.S.D. could make a knowing waiver of his Fifth Amendment rights under these circumstances.

The judge notes that he can appoint a lawyer for the employees, but since Biss insists that "Jennifer" has been retained, for now he will resist the temptation to appoint them counsel. However, "if concerns arise about the exercise of independent judgment by the attorney replacing Mr. Allen, I may reconsider the necessity of appointing counsel."

It also concludes with this oddity:

Plaintiffs raised a related concern. Plaintiffs explained that they had not identified new counsel previously out of concern Defendants’ attorneys will contact the new lawyer to intimidate him or her or threaten ethics violations. (Doc. 107 at 8 n.5.) At the hearing, I expressed my belief that, if I were in the new lawyer’s shoes, I would welcome communications from counsel familiar with the case and the underlying documents as I prepared to independently evaluate my clients’ potential legal jeopardy.

So, once again, Biss's actions don't seem to be doing him any favors, yet haven't reached the point at which he gets sanctioned for his behavior either. Sometimes it truly is stunning how much leeway a court will give certain lawyers. Still, none of this is good for the Nunes family and their case.

19 Comments | Leave a Comment..

Posted on Techdirt - 18 June 2021 @ 5:52am

Letting Newspapers Band Together To Demand Payments From Internet Companies Is Bad For The Internet And Bad For Journalism

from the bad-ideas dept

In the wake of Australia getting its ridiculous, anti-open internet link tax passed into law, the push to create similar such laws everywhere else has gone into overdrive. In the US, the main driver of this effort (which has been pushed by legacy newspaper giants) has been an antitrust exemption that would allow the newspapers to collude, in order to put up (what they think is) a joint effort to demand that Google and Facebook pay them for links. The supposed "antitrust" wing of the Democratic party, represented by David Cicilline in the House and Amy Klobuchar in the Senate, has decided that this is a good idea and introduced the Journalism Competition and Preservation Act (JCPA) (here's the House version). Leaving aside the oddity of thinking that the best way to deal with what you believe are dominant firms is to allow other firms to collude and avoid antitrust laws, the entire proposal is silly, and potentially destructive to the open internet.

Public Knowledge has put together a letter to Congress explaining why (our think tank, the Copia Institute, has signed onto the letter). In a separate blog post, Public Knowledge notes that while it as an organization has been largely supportive of Cicilline and Klobuchar's antitrust efforts around the tech companies (something we at Techdirt are somewhat less convinced by), this bill is a complete disaster.

The key part is exactly what we highlighted was wrong with the Australian law. The idea that this is a competition issue -- that newspapers need to be able to band together to have enough clout to negotiate a price for linking to their stories -- rests on a totally false assumption: that there is some underlying right to be paid for links in the first place. The whole nature of the open internet is that you don't need permission or a license to link to someone else. But this bill seems to think that's not true. And that's a problem.

The JCPA would create an antitrust law exception to allow certain publishers the ability to jointly negotiate business terms with major online platforms. Notably, it does not otherwise alter substantive law. However, no individual news publication currently has any legal right (via copyright, or any other statute) to prohibit third parties from linking to their content. Nor does banding together to collectively negotiate give such a right. In other words, a cartel of news sites is exactly as powerless to prevent Facebook or Google from linking to its members’ content as a small site would be negotiating on its own.

And that could lead some -- perhaps even courts -- to think that this bill actually alters copyright law to mean that you do need a license to link, and that would be horrific for the open internet.

This central disconnect means that the structure of the bill does not achieve its stated legislative aims. As such, we are concerned that this bill could be interpreted by courts to implicitly change the scope of copyright, expanding the exclusive rights that news publications enjoy in their material beyond what any copyright owner has ever enjoyed. To the extent that this creates a new substantive right to demand that material not be linked to, this is unwise; to the extent that it interferes with fair use rights, particularly of the rights of users of platforms, it is unconstitutional and violative of our international obligations.

The ability of one website to connect (“link”) to other websites, without needing to negotiate to do so, is a foundational component of modern internet infrastructure. Linking is not, and has never been, an act within the scope of copyright. It is not within the statutory or common-law ambit of copyright law, as merely linking to a piece of external content is not a reproduction, display, performance, or distribution of that content. As such rightsholders do not--and should not--have the ability under copyright law to prevent third parties from linking to their publicly available content. (Notably, the vast majority of rightsholders do not want such a right, and those that do already have technical methods which allow them to do so.)

Once again, what we're seeing with Klobuchar and Cicilline is that they're so deeply infatuated with the idea that "big tech is bad" that they haven't bothered to look at the details of their own proposals and what those proposals would mean for the open internet.

Read More | 45 Comments | Leave a Comment..

Posted on Techdirt - 17 June 2021 @ 10:49am

Australian Official Admits That Of Course Murdoch Came Up With Link Tax, But Insists The Bill Is Not A Favor To News Corp.

from the did-he-just-say-that-out-loud? dept

Earlier this year, we wrote a lot about the ridiculous anti-open internet Australian link tax that is now being pushed elsewhere around the globe. Anyone paying attention to the details knew that it was extreme crony capitalism at work, with the government forcing one set of massive companies (namely, Facebook and Google) to pay another set of massive companies, led by Rupert Murdoch's News Corp and Nine. For all the talk of how big tech companies are "monopolies," Australia's news market is considered among the most concentrated in the world, and it has been quite profitable for the likes of Murdoch.

And while defenders of the bill insist (incorrectly) that the bill is not a link tax, but is merely a "competition bill" to help those few giant newspaper companies "better negotiate" with the giant internet companies, that's bullshit for two reasons. First, it's a "negotiation" to pay for links, and no one should ever have to pay to link to some other site. That's just fundamentally against the concept of an open internet. Second, it's no real negotiation because if Facebook and Google fail to agree to a deal that satisfies the Aussie media bosses, the government can step in and force an agreement on them.

Lots of people -- including those in Australia -- noted that this all seemed like a scheme to make Rupert Murdoch richer. And now the Australian competition official, Rod Sims, who "oversaw drafting of the law," has flat out admitted that the whole thing was Murdoch's idea in the first place, though he insists it's "extremely strange" that anyone thinks it's a favor to Murdoch.

Australian Competition and Consumer Commission (ACCC) chair Rod Sims, who oversaw drafting of the law, acknowledged the negotiating system was proposed by the Rupert Murdoch-controlled publisher but said all major media operators in the country supported it.

I mean, yeah, of course they supported it. Because it's the government forcing other companies to give them free money in response to their own failures to innovate. Why wouldn't they support it?

It is true that Google and Facebook are bigger than News Corp., which is the point that Sims really really wants to focus on. But that doesn't even touch on whether or not it's appropriate to force one set of companies to pay for something that should be free (linking), to another set of companies that are still making a shit ton of money on their own.

"News Corp is 1% the size of Google. News Corp is one of four main media companies (in Australia). It's very likely not the one with the biggest reach. I just think this is a line put out by Google," Sims added.

"There were many people giving us ideas. News Corp was but one. This whole notion that this is about News Corp is extremely strange."

You literally just admitted that the idea came from News Corp! It wasn't "a line put out by Google." It was you, who just admitted what was obvious to anyone who's been paying attention to Murdoch for years. After all, Murdoch has been publishing op-eds (in his own company's publications, of course), demanding Facebook and Google pay him for years. It's not like he made it a secret.

Can you make an argument that Google and Facebook are too powerful? Sure, absolutely. But, can you then make the argument that these companies which found a way to build internet services billions of people like... should be forced to pay for Murdoch's brand of propaganda, despite there being no fundamental reason that he deserves any of that money? Not unless you want people to think you're in Murdoch's pocket.

20 Comments | Leave a Comment..

Posted on Techdirt - 16 June 2021 @ 1:47pm

FBI's Recovery Of Colonial Pipeline Bitcoin Ransom Highlights How The 'Ban Crypto To Stop Ransomware' Cries Were Wrong Again

from the that's-not-how-it-works dept

Last month we highlighted what seemed like a fairly silly Wall Street Journal op-ed arguing that banning cryptocurrency was the best way to stop ransomware, in response (mainly) to the well-publicized ransomware attack on Colonial Pipeline, which resulted in the company shutting down the flow of oil while it sorted things out. As we pointed out, not only was the idea of banning cryptocurrency unworkable, it was unlikely to do much to stop ransomware. Unfortunately, it appears that a number of other cryptocurrency haters jumped on this moment to push the idea even further, claiming that "society has a Bitcoin problem."

Of course, part of the key narrative in all of these pieces is that cryptocurrency, and Bitcoin in particular, somehow makes it easier for criminals to "get away" with these kinds of ransom demands, highlighting that it is somewhat easier to move around large values of Bitcoin than cash. However, as we noted in our original piece, the idea that cryptocurrency allows criminals to "get away" seemed extremely overblown, as we've seen plenty of cases where criminals using cryptocurrency were caught. And, as if to put an exclamation point on all of this, soon after the huge moral panic, the FBI announced that it had recovered over half of the money Colonial Pipeline had paid.

And, as the FBI special agent's affidavit showed, this was done in part by tracking how the money flowed across the public ledger. The NY Times ran an article noting that the FBI's recovery of the money here "upends the idea that Bitcoin is untraceable." A bunch of longtime Bitcoin/cryptocurrency followers scoffed at the NY Times article, because they've long known that Bitcoin's public ledger has always made transactions traceable. But it's actually important for people not deeply in the Bitcoin space to understand this as well. And the problem with so many of the "ransomware is really a cryptocurrency problem" articles was that they implied otherwise -- that cryptocurrency was somehow totally and completely untraceable.
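To make the "public ledger" point concrete, here's a toy sketch of why traceability is inherent to the design. The addresses and amounts below are entirely made up for illustration, and real chain analysis works on Bitcoin's actual UTXO graph with clustering heuristics, but the core idea is the same: every hop between addresses is published for anyone to see, so funds can be followed forward mechanically.

```python
# Toy "public ledger": a list of (sender, receiver, amount) records.
# All names and amounts are hypothetical, for illustration only.
from collections import deque

LEDGER = [
    ("victim", "ransom_addr", 75.0),
    ("ransom_addr", "hop_1", 60.0),
    ("ransom_addr", "hop_2", 15.0),
    ("hop_1", "hop_3", 60.0),
    ("hop_3", "cashout", 60.0),
]

def trace_funds(start_addr):
    """Breadth-first walk of the payment graph from start_addr,
    returning every address the funds passed through."""
    reached = {start_addr}
    queue = deque([start_addr])
    while queue:
        addr = queue.popleft()
        for sender, receiver, _amount in LEDGER:
            if sender == addr and receiver not in reached:
                reached.add(receiver)
                queue.append(receiver)
    return reached

# Every downstream address is visible to anyone reading the ledger,
# which is roughly what "following the money" means here.
print(sorted(trace_funds("ransom_addr")))
```

The hard part in practice is tying an address to a real-world identity (an exchange account, a seized server key), which is exactly the legwork the affidavit describes.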

As the NY Times article explains, what's important here is that it demonstrates that for all the hand wringing about cryptocurrencies and ransomware, the reality is that law enforcement is evolving with the times, and using the same kind of law enforcement detective work it's supposed to use to solve crimes.

Yet for the growing community of cryptocurrency enthusiasts and investors, the fact that federal investigators had tracked the ransom as it moved through at least 23 different electronic accounts belonging to DarkSide, the hacking collective, before accessing one account showed that law enforcement was growing along with the industry.

That’s because the same properties that make cryptocurrencies attractive to cybercriminals — the ability to transfer money instantaneously without a bank’s permission — can be leveraged by law enforcement to track and seize criminals’ funds at the speed of the internet.

That's an important point and one that often gets lost in the FUD surrounding new technologies (such as encryption) that might make law enforcement's job slightly more complex in the short run. But, at the same time, law enforcement needs to learn to adapt, not by undermining these technologies, but by understanding how they work, and by doing the actual legwork to trace those abusing the technology for criminal purposes.

So rather than jumping to the conclusion that we need to ban this or that technology because it makes it slightly more challenging for law enforcement, this is actually an example showing how if law enforcement does their job properly, the technology is not the problem.

Read More | 31 Comments | Leave a Comment..

Posted on Techdirt - 15 June 2021 @ 9:37am

If David Cicilline Gets His Way, It Would Destroy Content Moderation

from the consequences dept

Last week we looked at the various antitrust bills written by House Democrats (though with Republican co-sponsors conjured up at the last minute with an assist from Rupert Murdoch), and noted that none of them seemed likely to really solve the problems of internet consolidation. The crown jewel bill comes from Rep. David Cicilline, who is spearheading this entire antitrust effort. We discussed some of the problems with his bill last week, but a closer reading suggests that it would also create a disaster for content moderation. The bill reads:

It shall be unlawful for a person operating a covered platform, in or affecting commerce, to engage in any conduct in connection with the operation of the covered platform that—

(1) advantages the covered platform operator’s own products, services, or lines of business over those of another business user;
(2) excludes or disadvantages the products, services, or lines of business of another business user relative to the covered platform operator’s own products, services, or lines of business; or
(3) discriminates among similarly situated business users.

This language is clearly designed to target things like Google offering its own local reviews and listings rather than Yelp's or TripAdvisor's. And there are reasonable arguments to be made that a company like Google maybe should just use its own search ranking algorithm to see whether or not users prefer those third-party listings to its own.

But... the overly broad language in the Cicilline bill seems likely to have massive unintended consequences regarding content moderation in ways I don't think Cicilline would support. Indeed, for unclear reasons, an early draft of Cicilline's bill had more limiting language on part (3) above, such that it only covered "material" discrimination over services involving "the sale or provision of products or services." But the final language is much more broad and says it's an antitrust violation if there's "discrimination among similarly situated business users."

But here's the thing that people who have no experience with content moderation never seem to realize: everyone who is on the receiving end of a moderation decision they disagree with insists that they are being treated unfairly compared to some other "similarly situated" user, even if the reality (and context) suggest otherwise. By saying that it's an antitrust violation to discriminate between "similarly situated" business users, this bill is going to make those claims particularly legally fraught.

That's going to open up a massive loophole regarding content moderation. Let's take a few examples, starting with Parler. As you may recall, Parler was kicked off of AWS hosting, and also kicked out of both the Google Play Store and the Apple iOS App Store (though it has since returned to the App Store).

Parler sued Amazon, claiming it was an antitrust violation, which got laughed out of court. But, if Cicilline's bill becomes law, suddenly this becomes an open question again. Parler could easily argue that the removal was discrimination under the definition of the bill. After all, a key point in Parler's lawsuit was that Amazon treated Twitter differently than it treated Parler.

And, under the definition in (3), Parler could say that Amazon discriminated against it as compared to the "similarly situated business user" Twitter.

This might not impact Parler's lawsuit specifically, since enforcement of Cicilline's bill falls on government entities rather than private parties, but the bill opens enforcement up to "any Attorney General of a state," and I can pretty much guarantee that there are a bunch of state AGs who would happily step in and claim that these moderation efforts against Parler violated the law.

But it goes even further than that. Suddenly Twitter banning Project Veritas or Facebook shutting down events created by Infowars would raise the same questions. And all they'd need to do is find a friendly state AG to take them on.

In short, this antitrust bill would open up a huge loophole for propaganda or garbage fire websites that were banned (or even just diminished) to claim it was an antitrust violation, because they were treated differently than "similarly situated business users."

Just think of how the PragerU lawsuits against YouTube would appear very different under this bill as well.

It seems odd that a Democrat like David Cicilline would want to put in place an antitrust bill that would make it open season for Republican propaganda outfits, and their supportive AGs, to force social media companies to not just host, but to promote, their content (not doing so might be seen as "discrimination" compared to similarly situated websites), but it seems like that's what he's done. Perhaps that's the compromise that it took to get a Republican co-sponsor on board, but it's hard to see how this is a worthwhile trade-off.

Read More | 124 Comments | Leave a Comment..

Posted on Techdirt - 14 June 2021 @ 10:52am

Hypocrisy: Rupert Murdoch Has Always Hated Antitrust, But Now He Wants It Used Against Internet Companies Who Out-Innovated Him

from the the-cronyiest-of-capitalists dept

It's no secret that Rupert Murdoch is an extreme hypocrite. He spent decades railing against any kind of regulatory power to hold back companies, but as soon as his own attempts to build an internet empire flopped dramatically, he came around to being a major booster of regulatory crackdowns -- though only against the companies who out-innovated him. For years now he's been demanding that governments force the internet companies to pay him money -- a move that has been successful in his home country of Australia.

The latest is that Murdoch, who built his business empire by buying up competitors and doing everything possible to avoid antitrust authorities, is now a major force behind supporting antitrust efforts -- so long as they're aimed at the internet companies. When the Democrats released their 5 antitrust proposals last week, each one (perhaps somewhat surprisingly) had a Republican co-sponsor. That appears to have been thanks to Murdoch:

Fox Corp. and News Corp. lobbyists have been urging GOP members to support the bills this week, according to people familiar with the efforts, with two sources saying there could be as many as 3 to 4 GOP co-sponsors on each bill. Talks are ongoing....

Say what you want about Rupert Murdoch, but the idea that he's a free marketer who opposes regulations is nonsense. That was only true when the regulations involved his companies. Now that he's failed to innovate, he's spent the last decade demanding that governments punish the companies who actually competed better than he did. He's the cronyiest of the crony capitalists.

18 Comments | Leave a Comment..

Posted on Techdirt - 11 June 2021 @ 3:30pm

Will Congress' Big New Push On Antitrust Actually Solve Any Competition Issues?

from the probably-not dept

On Friday, as had been widely expected for a while, a bunch of House lawmakers led by David Cicilline introduced five new antitrust bills that would, if they become law, completely reshape how antitrust works in the US. At least for tech companies. Somewhat notably, many of the bills seem written specifically to target just one industry and to avoid having to deal with other industries. The text of the bills has been floating around all week as the Democrats who are pushing them hoped to find some Republican co-sponsors. And, based on Friday's press release, it appears they found at least one Republican to sponsor each bill (though only four Republicans in total, as they got Lance Gooden to agree to sponsor two of the bills).

Now, most of the bills strike me as extremely problematic -- and even my saying so will lead people to claim I'm somehow in the tank for these companies. Nothing could be further from the truth. I'm all for creative ideas on how to end the dominance of the largest companies and to increase competition. But I fear poorly thought out proposals will have massive unintended consequences that go way beyond punishing Facebook, Google and Amazon.

Each bill does something different, and there are some occasionally creative and interesting ideas in them, but it really seems like these bills are designed more to destroy the thriving tech industry out of spite than to actually encourage competition. As noted above, I'm in agreement that it would be good if we got more competition in the tech industry, but these bills take a very backwards-looking view on how to do that, basically by punishing companies for building successful products, rather than looking for ways to enable more actual competition. I've written before on ways to actually break up the dominance of big tech players, mainly by getting rid of many of the existing rules that have allowed the big players to block and limit competition. But these bills don't do that. They take a much more punitive approach to successful companies, rather than an approach that enables more competition through innovation. That's disappointing.

To me, the one that seemed most interesting at first glance was the ACCESS Act ("Augmenting Compatibility and Competition by Enabling Service Switching Act") by Rep. Mary Gay Scanlon. It basically requires "covered platforms" to maintain open APIs for interoperability and data portability. And, on a first pass, that is a good thing, and obviously quite consistent with my belief that we need to build a future that is based more on open protocols rather than siloed platforms. Portability and interoperability are certainly a step in the right direction for that.

However, the way the bill is actually written suggests a real lack of forward-looking technical thinking. It would lock in certain ideas that don't necessarily make any sense. Basically, all this bill would actually do is make sure that you could transfer your data out of an existing internet giant. The big internet companies already do this... and because of the way it's been implemented, it's almost entirely useless and doesn't help anyone. This bill wouldn't change that, unfortunately.

On top of that, this bill fails to deal with the very real and very tricky challenges regarding data portability and interoperability as it pertains to privacy. Instead, the bill just handwaves it away, basically saying "don't do bad stuff regarding privacy" with this data. That's... not going to work, and is more or less an admission that the drafters of the bill don't want to deal with the very significant challenges of crafting a data portability/interoperability setup that is also congruent with protecting privacy.

The real way to do this would be to separate out the data layer so that it's not controlled by the centralized companies at all, but in the hands of the end-users or their agents. But while that could happen as an accident of this bill, it's clearly not the intent. Thus it seems like this bill would not help very much, and that's a real missed opportunity. It's nice that it recognizes portability and interoperability as issues, but it doesn't do the hard work necessary to make that actually meaningful.

Finally, perhaps the most problematic (by far) part of this bill is that if a "covered company" wants to change its APIs, it would need to get FTC approval -- and that seems like a terrible idea. Imagine having to get approval from the government every time you change your API? What? No. Bad.

A covered platform may make a change that may affect its interoperability interface by petitioning the Commission to approve a proposed change. The Commission shall allow the change if, after consulting the relevant technical committee the Commission concludes that the change is not being made with the purpose or effect of unreasonably denying access or undermining interoperability for competing businesses or potential competing businesses.

I mean, yikes. That's going from permissionless innovation -- the very core of our innovation engine -- to having the FTC act as the approver of any slight change to an API. That's really, really bad.

The bill that may get the most attention is Cicilline's own bill that basically says successful internet companies could no longer promote their own ancillary services over those of competitors. Basically, Google couldn't insert its own local results, or its own maps, over a third party's. Think of this as the Yelp Finally Forces Google To Use Yelp's Listings Act, because that's the main driver behind this bill. In essence, some companies that do more specialized search and content don't want Google to be able to compete with them, and more or less want traffic they might not have earned. I can see a slight argument for how the practice of actual monopolies favoring their own services and excluding others could be anti-competitive, but this bill would make it de facto anti-competitive -- and that seems likely to create massive unintended consequences that won't be very good for the internet.

There are, after all, lots of cases where it makes quite a lot of sense for companies to link their ancillary products. Yet, here, doing so will almost certainly lead to a costly antitrust fight, meaning that it will be quite difficult for many companies to build useful complementary services. I don't see how that benefits the public. Again, it seems that a much better solution would be to remove the barriers that currently limit the ability of third-party competitors to step in and build tools that interoperate with the bigger players, but that's not the goal here. The goal seems to be to restrict the big internet companies to much more limited offerings, rather than letting them provide a wider suite of services.

Another major change comes from Rep. Hakeem Jeffries, and would effectively make it much, much harder for internet giants to buy companies. A key part of the bill is that the acquiring company would have to affirmatively show that the merger is legit, rather than the government having to show that the merger is problematic. Shifting the burden of proof would basically mean that most such mergers would be presumed unlawful, rather than the opposite. This could have huge and problematic implications for how our economy operates today.

On the good side, the bill would give the FTC and DOJ more resources to review acquisitions. However, as we've discussed before, in trying to block anti-competitive acquisitions (which are a legitimate concern!) a bill this broad will almost certainly knock out other kinds of important and useful acquisitions (such as ones that keep failing or flailing services alive). More importantly, it may take investment capital away from competitive entrepreneurial ventures.

No good investor invests in a company with a plan to just sell it off to a big tech company (indeed, most investors will ask startups how they would deal with such a competitive threat), but having the big guys available as a buyer provides an alternative exit -- not as good as succeeding on your own, but still better than losing the investment entirely -- and that makes it easier for investors to make these kinds of bets. Now that possibility of a return will become much more difficult, meaning that investment capital is less likely to go to entrepreneurs trying to create competitive solutions. And that's not good!

A separate bill from Rep. Neguse basically just raises the cost of mergers and acquisitions and... um... sure? Fine. I don't see that as problematic really. I mean, at the margins, making it more costly to do an acquisition might be a nuisance, but the changes and increases don't seem particularly significant here -- and certainly not enough to stop a major acquisition (though, arguably it might drive down the amount that the owners of the acquired company get, effectively transferring it to the government). Consider it kind of a slight tax on selling your business. The bill would also increase funding to the FTC and DOJ to work on antitrust issues, and that seems reasonable as well.

Finally, there's Rep. Jayapal's bill that is pretty clearly designed to just stop Amazon from selling its own goods on Amazon. I know this is an issue lots of people complain about, but it remains unclear to me how much of an actual problem it is. Lots of retailers sell house branded products and compete against others without much of a problem. Costco has its house Kirkland brand, which it sells alongside other companies' competing products. Is that so problematic?

As some are pointing out already, these bills could kill off (or severely limit) a bunch of services that people actually like, mostly as punishment for those services having been so successful. And that's a problem.

It's fine to admit that there's a delicate balance here. How do you stop companies from becoming so powerful that they alone squeeze out or stifle competition, while at the same time not putting in place stringent rules that, by themselves, stifle useful innovations? There are really two broad approaches: (1) punish or limit the ability of companies to act, or (2) figure out better ways to create incentives for competitors to succeed. Unfortunately, regulators tend to jump to (1) and avoid even trying (2). That seems to be the case here.


Posted on Techdirt - 11 June 2021 @ 10:52am

Senator Wicker Introduces Bill To Guarantee The Internet Sucks

from the you-did-what-now? dept

Why does Senator Roger Wicker from Mississippi hate the internet? Wicker, who has a close relationship with the big telcos -- which have long made it their mission to destroy the open internet -- was already a co-sponsor of an awful "Section 230 reform" bill last session, and is back now with what he's ridiculously calling the "PRO-SPEECH" Act. It stands for "Promoting Rights and Online Speech Protections to Ensure Every Consumer is Heard Act." But, in reality, it is a blatant (and unconstitutional) attack on free speech.

The bill more or less bans any website from doing any moderation. The key part:

An internet platform may not engage in a practice that does any of the following:

(1) Blocks or otherwise prevents a user or entity from accessing any lawful content, application, service, or device that does not interfere with the internet platform's functionality or pose a data privacy or data security risk to a user.

(2) Degrades or impairs the access of a user or entity to lawful internet traffic on the basis of content, application, service, or use of a device that does not interfere with the internet platform's functionality or pose a data privacy or data security risk to a user.

Consider it the all porn and all spam allowed act! Kind of ironic for a Senator who once pushed an unconstitutional ban on selling video games to children. Under this bill, sites couldn't even stop kids from accessing or playing violent or pornographic video games.

There are two exceptions, both of which are silly. One is for "small internet platforms." And the other is... wait for it... if you declare yourself a "publisher" then it no longer applies. Yes, that's right. Senator Wicker is trying to make the ridiculous and nonsensical "publisher/platform" distinction an actual thing, despite the fact that this is blatantly unconstitutional.

Let's just remind everyone how this works: the 1st Amendment includes both the right for any website hosting content to make editorial decisions about what it will and won't include, as well as a right of association to say "I don't want to be associated with that stuff." In this setup, where a site has to declare itself a platform or a publisher, that effectively means taking away the 1st Amendment rights of a platform and turning it into a garbage dump of spam and porn. Or... it has to declare itself a "publisher," at which point it faces liability for everything that shows up.

The end result is that this bill leans into the moderator's dilemma and creates two types of internet sites: complete garbage dumps of spam/abuse/porn/harassment where no moderation can take place, and Hollywood-backed squeaky clean productions. It wipes out the parts of the internet that most people actually like: the lightly moderated/curated user-generated aspects of social media that enable lots of people to have a voice and to connect with others, without being driven away by spammers, assholes, and abusers.

It also throws in this tidbit to make it clear Wicker doesn't want social media sites to kick Nazis off their platforms any more:

An internet platform may not take any action against a user or entity based on racial, sexual, religious, political affiliation, or ethnic grounds.

Thing is, discrimination on racial, sexual, religious, and ethnic grounds is already covered under civil rights laws -- and they're protected classes because they're mostly things inherent to someone, and not choices they make. Your political views and affiliation are different. And, the fact is, there are almost no sites out there (despite what ignorant people are screaming) that do any moderation based on political affiliation. Or, if they do, it's to literally ban the American Nazi Party. But under Wicker's bill, you couldn't ban the American Nazi Party or its members any more.

I wonder why he wants that?

Then there's the "I'm protecting Parler" part of the bill. It says this would be a presumed method of "unfair competition."

Any action taken by a larger internet platform that wholly blocks or prohibits an internet platform that competes with the large internet platform (or any affiliate of the large internet platform) from making use of the large internet platform.

So, this would mean that a platform like Parler could violate whatever policies it wants at companies like Amazon, Google, and Apple, and those companies would not be allowed to kick it off for any of those policy violations.

There are also onerous transparency requirements based on the false idea that there is a clear set of rules that every platform uses, rather than an ever-changing and evolving set of policies that is constantly dealing with edge cases.

The whole thing is a stupid wishlist of whiny fake conservatives who want to play the victim and claim they're oppressed for the culture war they're waging. But the end result would be wiping out all the important and useful parts of the internet, and dividing it into two piles: all garbage all the time, or the Disney-fied, locked-down part. No one should want that.

Which makes you wonder why Wicker does.


Posted on Techdirt - 11 June 2021 @ 6:25am

Music Publishers Sue Roblox In Full Frontal Assault On The DMCA

from the here's-a-big-one dept

A huge and potentially important copyright lawsuit was filed this week by basically all of the big music publishers against the immensely popular kids' gaming platform Roblox. Although the publishers' trade association, the NMPA, put out a press release claiming credit for the lawsuit, it doesn't appear that the NMPA is actually a party. The lawsuit is, in many ways, yet another full frontal assault on the DMCA's safe harbors by the legacy music industry. There's a lot in this lawsuit and no single article is going to cover it all, but we'll hit on a few high points.

First, this may seem like a minor point, but I do wonder if it will become important: buried in the massive filing, the publishers mention that Roblox did not have a registered DMCA agent. That seems absolutely shocking, and potentially an astoundingly stupid oversight by Roblox. And there's at least some evidence that it's true. Looking now, Roblox does have a registration, but it looks like it was made on... June 9, the day the lawsuit was filed.

Wow. Now, that may seem embarrassing, but it might actually be more embarrassing for the Copyright Office and raise a significant and important legal question. Because it appears that Roblox did at one time have a DMCA agent registration but, as you may recall, back in 2016, the Copyright Office unilaterally decided to throw out all of those registrations and force everyone to renew (and then to renew again every three years through a convoluted and broken process).

There's an argument to be made that the Copyright Office can't actually do this. The law itself just says you need to provide the Copyright Office with the information, not that it needs to be renewed. The Copyright Office just made up that part. Perhaps we finally have a test case on our hands to see whether or not the Copyright Office fucked up in dumping everyone's registration.

Still, that's a minor point in the larger lawsuit. The publishers throw a lot of theories against the wall, hoping some will stick. It seems like most should be rejected under the DMCA's safe harbors, because it truly is user generated content, even if the lawsuit tries a variety of approaches to get around that. Part of the lawsuit argues contributory and vicarious copyright infringement, more or less pulling the "inducement" theory from the Grokster ruling, which basically says that if you as a company encourage your users to infringe, you could still be liable (this is, notably, nowhere in the actual law -- it's just what the Supreme Court decided).

But to get there, the lawyers for the music publishers seem to want to take a Roblox executive's comments completely out of context, in a somewhat astounding manner. The "proof" that Roblox is encouraging people to infringe is here:

Roblox is well aware that its platform is built and thrives on the availability of copyrighted music. As Jon Vlassopulos, Roblox’s global head of music, publicly stated just last year: “We want developers to have great music to build games. We want the music to be, not production music, but really great [commercial] music.” (Alteration in original). To that end, Roblox actively encourages its users to upload audio files containing copyrighted music and incorporate them into game content on the Roblox platform. Roblox advertises the importance of music in games and makes it easy for users to upload, share, and stream full-length songs.

But... if you read the article that they're using for that Vlassopulos quote, it's not directed at developers and users of their platform. It's targeted at musicians and the music industry. The whole point of the quote is to let musicians and the industry know that Roblox is open to licensing deals. It's pretty obnoxious to try to spin that as encouraging people to infringe when, in context, it sure looks like the exact opposite. I mean, literally the next sentence (which doesn't make it into the lawsuit) is about how they're "testing the waters" by making a deal with a small indie label to make all of its music available on Roblox.

So it seems to be Roblox saying the exact opposite of what the publishers are claiming. That's... kinda fucked up.

The lawsuit also tries to spin the impossible task of trying to moderate as proof that any failures in moderation are deliberate.

There is no question that Roblox has the right and ability to stop or limit the infringement on its platform. But Roblox refuses to do so, so that it can continue to reap huge profits from the availability of unlicensed music. While Roblox touts itself as a platform for “user-generated” content, in reality, it is Roblox—not users—that consciously selects what content appears on its platform. Roblox is highly selective about what content it publishes, employing over a thousand human moderators to extensively pre-screen and review each and every audio file uploaded. Roblox’s intimate review process includes review of every piece of copyrighted music, generally identified by title and artist—to ensure that it meets Roblox’s stringent and detailed content guidelines and community rules. This process ensures that Roblox plays an integral role in monitoring and regulating the online behavior of its young users.

Roblox thus unquestionably exercises substantial influence over its users and the content on its platform, ostensibly in the name of “safety.” Yet Roblox allows a prodigious level of infringing material through its gates, purposely turning a blind eye for the sake of profits. Rather than take responsibility, Roblox absurdly attempts to pass the obligation to its users—many of whom are young children—to represent to Roblox that they own the copyrights to the works they have uploaded.

Coincidentally, just last week we published our content moderation case study on Roblox, focused on how it tries to stop "adult" content on the platform. We noted that the company is very aggressive and hands-on with its moderation efforts but (importantly) it still makes mistakes, because every content moderation system at scale will make mistakes.

So just because Roblox is aggressive in its moderation, and even if it says it reviews everything, that doesn't mean that it "refuses" to stop infringement. It just means it doesn't catch it all. Indeed, the company has said in the past that it uses an automated third party monitoring tool to try to catch unauthorized songs (though, notably, this lawsuit is about the publishing rights, not the recording rights, so arguably a monitoring tool might catch some sound recordings while missing other songs that implicate songwriters/publishers -- but that's getting super deep in the weeds).

Indeed, the impossibility of catching everything -- while still encouraging websites to try -- is why we want things like Section 512 of the DMCA or Section 230 of the CDA. If you suddenly make websites liable for any mistakes they let through, then you create a huge problem. And claiming that their aggressive moderation implicates them even more only encourages sites to do less moderation in the long run.

But, the publishers don't care about that. Their end goal is clear: as in the EU, they want to force every website to have to buy a blanket license for music. They basically want to do away with the DMCA altogether, then just sit back and collect payments. They want to change the internet almost entirely from a tool for end users to a cash register for music publishers.

There are some other oddities in the lawsuit. It repeatedly tries to claim that Roblox is liable for direct infringement itself, but that theory seems like a stretch. Even the filings admit that the music is all uploaded by users:

Despite Roblox’s written policies, users regularly upload files containing copyrighted music. The act of “uploading” a file to Roblox involves the user making a copy of the file and distributing it to Roblox, where it is then hosted on Roblox’s servers.

To upload an audio file, a user simply opens the Roblox Studio and clicks on a tab marked “Audio,” which then prompts the user to choose a file on their local hard drive, in either .mp3 or .ogg format to be copied and distributed to Roblox’s servers.

It tries to build out the inducement theory by saying that because Roblox encourages developers to use music in their games, it is encouraging infringement. But that's nonsense. Nothing in what Roblox says encourages infringement. It's just saying that sound and music can enhance a game. Which is clearly true.

Roblox makes the process of uploading infringing music extremely easy for users. Roblox even published an article designed to encourage developers to add music to their games, which explains: “While building a game, it’s easy to overlook the importance of sounds and music.” (Emphasis added).4 That page gives users step-by-step instructions on how to copy and distribute their music files to the Roblox platform.

So what? That's not telling users to infringe. If anything, it's saying "find some music you're able to add to this legally." You'd think that publishers would be happy about that, as it opens up a new line of business where they could license their music, which is what the Roblox exec was talking about at the beginning. But leave it to the greedy publishers to not want to do the hard work here, and instead try to force a big company into a big payment.

Roblox has already put out a statement saying (not surprisingly) that it's "surprised and disappointed" by the lawsuit. It seems likely that it will mount an aggressive defense, and it could be yet another important case in seeing whether or not the legacy music industry is able to chip away at another important aspect of the DMCA, and to force all websites that host third party content to buy blanket licenses.

“As a platform powered by a community of creators, we are passionate about protecting intellectual property rights – from independent artists and songwriters, to music labels and publishers – and require all Roblox community members to abide by our Community Rules,” said the statement.

“We do not tolerate copyright infringement, which is why we use industry-leading, advanced filtering technology to detect and prohibit unauthorised recordings. We expeditiously respond to any valid Digital Millennium Copyright Act (DMCA) request by removing any infringing content and, in accordance with our stringent repeat infringer policy, taking action against anyone violating our rules.”

“We are surprised and disappointed by this lawsuit which represents a fundamental misunderstanding of how the Roblox platform operates, and will defend Roblox vigorously as we work to achieve a fair resolution,” continued Roblox’s statement.

Of course, this is par for the course for the legacy industry -- especially the publishers, as led by the NMPA's David Israelite. They wait for various internet services to get popular, and then rather than figuring out how that helps them, they sue. It's how they constantly kill the golden goose. They've done it with various internet music services, music games, and more. They're currently trying to do it with Twitch and now Roblox as well. They overvalue the music component, and choke off the long-term business prospects for these platforms, many of which have music as an ancillary add-on.

It's silly, short-sighted, and anti-culture. In other words, it's the legacy music industry's usual playbook.


Posted on Techdirt - 10 June 2021 @ 2:04pm

Instagram's Big Experiment With De-Prioritizing 'Likes' Fizzles As Some People Apparently Really Like 'Likes'

from the ah-well-nevertheless dept

Back in the fall of 2019, we wrote about how Instagram was experimenting with hiding "likes" from US users, to try to cut down on the awkward incentives they created -- such as people obsessing over who and how many people liked the pictures they posted. It was an interesting move, and we appreciated the willingness to experiment with making sure the platform wasn't just encouraging socially problematic behavior. However, now the company has announced that some people were really upset to lose their likes.

What we heard from people and experts was that not seeing like counts was beneficial for some, and annoying to others, particularly because people use like counts to get a sense for what’s trending or popular....

I mean... that seems obvious? It's not clear why you needed to run a test to find that out. And wasn't part of the point of the experiment to move away from people obsessing over "trending" or "popular" content? Either way, Instagram's solution is to pass the decision on to users.

So... now it's all optional. Perhaps that's better than forcing it on everyone, but it is interesting:

Starting today, we’re giving you the option to hide like counts on all posts in your feed. You’ll also have the option to hide like counts on your own posts, so others can’t see how many likes your posts get. This way, if you like, you can focus on the photos and videos being shared, instead of how many likes posts get.

To some extent this is great. Having more options is good, and giving more power to end users is obviously good. But it sure sounds like the default will still be to include likes, and that means the vast, vast majority of users will still have them.

Casey Newton has some more details about what happened, and says that despite Instagram boss Adam Mosseri really thinking this was going to be a big deal, it turned out that people just didn't really care one way or the other.

“It turned out that it didn't actually change nearly as much about … how people felt, or how much they used the experience as we thought it would,” Mosseri said in a briefing with reporters this week. “But it did end up being pretty polarizing. Some people really liked it, and some people really didn't.”

As Casey notes, this seems to go along with a lot of recent research suggesting that some of the early panic about how "the kids these days" were all obsessing over likes and trends and whatnot... turns out to not really be true. It certainly has a lot of the hallmarks of a moral panic involving a new technology and "the kids these days."

Casey concludes that the unexpected lesson from the varied response to this experiment is really that users want more control over their own experience -- a point some of us have been banging the drum on for years. Of course, it remains to be seen if Instagram (and Facebook) bakes that lesson into other parts of the platform going forward...



