
Is There A Better Word Than “Balance” In The Copyright Debate?

Mike Masnick questions the word “balance” in the copyright debate:

I’ve long thought that balance is the wrong way to look at it. The purpose of copyright law is to incentivize the creation of new content, and thus the standard on which copyright law should be judged is one where the [benefits of the] creation of content is maximized. As such, there shouldn’t be a question of balance, because the ideal situation where content is maximized should make everyone better off. Talking about balance is figuring out how both sides should compromise to meet in the middle. Talking about maximizing content creation, on the other hand, is talking about ways to improve the marketplace of options for everyone.

He links to a paper by Abraham Drassinower of the U of T Law School arguing that balance is the wrong way to view copyright policy. Drassinower argues that “balance” as a concept in copyright suggests the law is designed to reward a content creator for their labour (the “sweat of the brow” argument), though Masnick has to tease out the main point: “balance” falsely implies that this is a zero sum game, when “the goal of copyright should be maximizing the [benefits of the] creation of content overall, such that everyone is better off.”

I’m sold. I tried to use this point at the Toronto Copyright Townhall and in my submission to the consultation.

But, if not balance, then what?

Words like “balance” are often used to make sure that the interests of the public aren’t forgotten in the face of copyright holders’ interests. I strongly support the group Fair Copyright for Canada, but “fair” has problems similar to “balance.” What words might serve to include the public interest without suggesting a zero sum game? Mike described it as “maximizing [the benefits of] content creation,” but that seems more useful as an explanation than as a sound bite.

What about “calibrate”? I notice that Mike used the word in a subsequent post on why morality isn’t relevant in copyright: “A properly calibrated system is one where there’s the greatest overall economic good and everyone has the greatest opportunity to benefit” (strongly related: if it’s an economic question rather than a moral one, rights holders’ interests are not necessarily opposed to the public interest). “Calibrate” seems like the most accurate word. It doesn’t directly conjure up the notion of the public interest, but it does so indirectly by suggesting an approach that’s about more than “protection.” But it’s too technical for a mainstream audience.

Is there a more accessible synonym for “calibrate”? Optimize? It works, but “optimizing copyright law” seems a bit too vague, and it doesn’t really capture the non-zero sum game or the public interest. Thesaurus.com doesn’t help much either.

So what else? I’m not sure. I like “calibrate,” but it won’t work with all audiences. “Optimize” is nice to use in passing to reinforce the point, but it doesn’t introduce it. “Balance” and “fair” are still useful for drawing attention to interests beyond those of rights holders, but I won’t offer those terms without a caveat or disclaimer.

Other suggestions?

Creative Commons Attribution-ShareAlike 4.0 International

What is Cyberbullying Anyways?

This post originally appeared on Techdirt.

We’ve been hearing a lot about “cyberbullying” lately. Cases like the Lori Drew incident have politicians and teachers all over looking to pass vague new rules and laws (or twist existing ones) to punish behaviour they feel is wrong. The problem is that no one really seems to be able to define the term, at least not in a way that distinguishes it from simply being a jerk online. So it’s encouraging to see a paper from Darby Dickerson, a vice president of Stetson University, calling on educators to slow down and define cyberbullying before creating policies about it, though I’m not sure she gets to the heart of the issue. Dickerson observes that people have been using the term often and easily, without any real consensus on what it includes and what it doesn’t. In the absence of a generally accepted scholarly or legal definition, she calls on universities to take four steps before creating a cyberbullying policy:

  1. consider the types of activity that might be included within the term,
  2. consider the type of harm,
  3. consider the level of intent required by the offender, and
  4. determine the extent to which it will address off-campus conduct.

This is good advice, and Dickerson does a pretty good job of outlining the concerns. She notes that conduct such as “cyberstalking” or “cyberthreats” might be included, while issues of fraud probably shouldn’t be, arguing that “not all misconduct that occurs online should be labelled as cyberbullying.” She cautions institutions to remember “free speech and related constitutional concerns.” She’s skeptical of extending the term to include simply being a jerk online, and she questions labelling students as cyberbullies when they don’t display real malice or hostility. She also raises lots of important questions about what it means to be “off-campus” in cyberspace. Dickerson concludes by urging institutions to clearly define the term before enacting policies, highlighting many important questions that must be answered first.

Yet… Dickerson ignores one major consideration: why have a separate policy for cyberbullying anyway? It seems to me that in order to consider these issues sanely, we need to stop pretending they’re separate things simply because we apply a “cyber” prefix to them. What’s a “cyberthreat?” How is that different from a threat in general? Is a “cyberthreat” just a threat made online? What if it’s made with a cell phone instead? What about a plain old telephone? Yes, the medium must be considered (“you’re going to die” is different when shouted in a playground than written in letters cut out of a magazine…), but do we create separate terms or policies for each medium?

We do often need to re-examine our laws and policies in the face of new technologies, but it rarely makes sense to have separate “cyberpolicies” instead of ensuring that existing policies are adapted to handle the new technologies. Why not ensure that existing harassment policies cover real harassment that occurs online instead of creating a new “cyberharassment” policy? Without the difference between cyberbullying and bullying in general at the heart of this discussion, people run the risk of spending their energy blaming the technology and grandstanding, creating new policies with troubling unintended consequences rather than addressing the real issue, which may often just be plain old bullying in a new context. The new context can certainly present new challenges that might warrant policy changes, but people should be careful not to get distracted from the issue of bullying just because it has “cyber” tacked onto the front.

Read the comments on Techdirt.

Creative Commons Attribution-ShareAlike 4.0 International

The Illusion That “Choice” Means That There’s Nothing To Fear From Code

Adam Thierer’s reaction essay in the Cato Unbound debate, “Code, Pessimism, and the Illusion of ‘Perfect Control,’” appeared on Friday. He argues that the basis for Lessig’s pessimism in his book, Code, was his illusory belief that code provides a mechanism for “perfect control.” While he levels some strong criticisms of this position and argues that a regulatory alternative could be much worse, he seems to take an equally illusory position of optimism in the essay.

First, Thierer ignores all the bad stuff:

Not only are walled gardens dead, but just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives. How can this be the case if, as Lessig predicted, unregulated code creates a world of “perfect control”?

I’ve already agreed with Zittrain that “cracked open” isn’t good enough, but… did I miss the death of walled gardens? What about the iPhone app store, the Kindle, and the walled garden approach of Facebook and other social networking sites? I still believe there’s reason to be optimistic (open strategies tend to win out), but to ignore all of the latest walled gardens is to ignore several elephants in the room (that’s one crowded room…). This was disappointing, as Thierer has provided a much more nuanced view at other times.

Second, Thierer has an awkward take on the difference between “open” and “closed” technologies:

Indeed, despite all this hand-wringing by the Lessigites, there exists a diverse spectrum of innovative digital alternatives from which to choose. Do you want wide-open, tinker-friendly devices, sites, or software? You got it. Do you want a more closed, simple, and safe online experience? You can have that, too. And there are plenty of choices in between. It sounds more like “perfect competition” than “perfect control” to me.

This fallacy just grates on me. The spectrum of technologies Thierer presents has “tinker-friendly” at one end and “safe and simple” at the other. Why don’t we demand both? WordPress defies this spectrum; a hosted blog at WordPress.com is safe and simple, but that same code is available at WordPress.org for anyone to install and tinker with on their own servers. Few would disagree that Firefox is safe and simple, but it’s also “wide-open” free software with which anyone can tinker.

What bothers me about this spectrum is that Thierer implies, whether intentionally or not, that “tinker-friendly” means complicated and dangerous, while “closed” allows things to be safe and simple (because we all know how safe and simple Windows is…). There is no reason that technology needs to be “closed” in order for it to be safe and simple. WordPress and Firefox are not compromises between freedom and ease of use, but technologies that insist on both. Yes, it’s a challenge to combine freedom and simplicity, but the two are not opposites, and there is no inverse relationship between them.

We should demand better from technologies that limit freedom. Demanding better isn’t simply choosing another product to avoid the chains yourself; it also means helping your neighbour to do so. I’m not sure that this is the cyber-collectivism that Thierer ascribes to Lessig, as Zittrain’s argument for civic technologies takes a middle road between cyber-libertarianism and the “technocratic philosopher kings” Lessig is accused of suggesting, but it’s more than just saying that things are fine because we have some choice.

By making it seem like there’s nothing wrong and that closed systems go hand-in-hand with “safe and simple,” Thierer responds to Lessig’s illusions with an illusory picture of his own. He is right that code doesn’t provide “perfect control,” and that pessimism is unwarranted, but that doesn’t mean we shouldn’t recognize shortcomings and demand better from the makers of technologies on which we increasingly rely.

Creative Commons Attribution-ShareAlike 4.0 International

Unlocking An iPhone Is Not Freedom; Zittrain Argues For Civic Technologies

Cato Unbound has an outstanding online debate going on right now about Lawrence Lessig’s book Code and Other Laws of Cyberspace as it turns 10. Declan McCullagh started things off with a post entitled “What Larry Didn’t Get,” offering a libertarian critique of Lessig’s approach and accusing him of favouring “technocratic philosopher kings.” Jonathan Zittrain has the latest post, “How To Get What We All Want,” which focuses on the similarities between McCullagh and Lessig and takes a middle ground between libertarianism and government regulation, arguing for civic technologies. Adam Thierer has a post going up on Friday, and Lessig himself will have the last word on Monday. I highly suggest you check it out if you’re at all interested in these issues and haven’t seen it already.

Now, I haven’t yet read Zittrain’s book, The Future of the Internet — And How To Stop It, but from the sorts of things I’ve read about it, I don’t think I share his pessimism. However, one line in his contribution to the debate really resonated with me. After talking about the dangers and limitations of proprietary technologies controlled by vendors (e.g. iPhone, Kindle, Facebook), he remarks:

This is the future of the Internet that I want to stop, and it’s small solace that geeks can avoid it for themselves if they can’t easily bring everyone else with them. [emphasis mine]

I get so frustrated when people rationalize the locked-down nature of the iPhone by saying that they can just unlock it. Unlocking an iPhone is not freedom: (1) it still rewards Apple, the maker of the chains, through the purchase; and (2) it’s a disservice to the vast majority of people who don’t have the skills to unlock their devices.

I strongly believe that if geeks want to do something useful to solve the problems that Lessig and Zittrain identify, it has to involve supporting free (libre) technologies that don’t have any chains, instead of just buying into proprietary technologies and removing their own chains.

The counterargument to Zittrain’s thesis isn’t a jailbroken iPhone; it’s an OpenMoko Freerunner.

This is why Zittrain holds up Wikipedia as an example of a civic technology; he notes that Wikipedia is licensed freely. Free culture and free software are what produce civic technologies.

I don’t share his pessimism, but I sympathize with his argument for civic technologies:

Civic technologies seek to integrate a respect for individual freedom and action with the power of cooperation. Too often libertarians focus solely on personal freedoms rather than the serious responsibilities we can undertake together to help retain them, while others turn too soon to government regulation to preserve our values. I don’t think .gov and .com never work. I just think we too easily underestimate the possibilities of .org – the roles we can play as netizens rather than merely as voters or consumers.

Creative Commons Attribution-ShareAlike 4.0 International