Online Harms Bill Must Address Platform Liability And Provide For Swift Banning Of Platforms

Contrary to my previous objections to the Online Harms Bill, which I criticized as a “too little, too late” nothingburger and “disappointing” because age verification is missing, I am now finding new ways to work with this law to get precisely where we need to be regarding corporate criminal liability of platforms. Given that we don’t have the sociopathic section 230 of the CDA here, all we need is to be bold and move fast, before the law is struck down on constitutional grounds by corporate lobbies.

The Online Harms Bill creates a very welcome tool to repress rampant tech-facilitated crimes by reversing the criminal-law onus: in other words, we can finally say that anyone who produces and disseminates harmful content is by definition guilty until proven otherwise.

Among many things, I see a clear possibility to raise criminal sentencing for child pornographers from nothing to life imprisonment through the Online Harms Bill, simply by proving that juvenile porn is, according to United Nations reports, a most blatant instance of hate speech and antisocial behaviour. Interference with minors is absolutely encompassed in the current hate speech definition. Moreover, we have decades of studies and reports on the societal decay and breakdown resulting from technology-facilitated violence (a.k.a. hate speech) against women and children.

My understanding is that we will be setting up administrative tribunals where you don’t need to be a member of a bar: you can be a social worker and hand out life sentences. To accelerate trials and sentencing, we can also implement AI decision-makers, as in the European Court. They seem to be doing pretty well so far.

We have extensive reports on the ways that platforms knowingly encourage and perpetuate hate speech, mainly in the form of tech-facilitated violence. Honestly, I don’t see how user-generated and hardcore porn (and anything that is not LGBTQ+) will get a hate speech exemption, given the Privacy Commissioner report (which stayed hidden for as long as it possibly could) specifically on how the consent of unwitting “performers” is NEVER verified on Aylo. Even the new “safeguards” Aylo brought forward include the possibility of consenting on somebody else’s behalf by providing a release form. As if a user couldn’t produce a fake release. I had 9 remixes commercialized under my name, and someone gave a US publisher a release signed by someone pretending to be me, so Aylo’s efforts are total bullshit in that regard. The rest is willful blindness by pro-Aylo officials. This is just one example of organized inefficiency.

The Online Harms Bill should also allow victims from outside of Canada to file complaints. We learned from parliamentary sessions on the status of women that intimate partner violence victims are fleeing Canada, because the criminal justice system here intentionally compromises their safety by protecting and releasing violent criminals. We saw in these sessions that representatives of the current administration were antagonizing and harassing victims (survivors left in tears), which shows that officials’ political interests are aligned with the rise of technology-facilitated violence. It is our duty to take the Online Harms Bill and use it against all the corporations and their users that these officials try to protect. It is a small sacrifice to stop speech temporarily (voluntarily remain silent, or shut down or pause social media accounts) until we weed out the bad apples once and for all.

I am currently examining a report from 5 years ago called Deplatforming Misogyny, on platform liability for technology-facilitated violence, and will compare it with the efforts brought forward in the Online Harms Bill. The report explains how digital platforms’ business models, design decisions, and technological features optimize them for abusive speech and behaviour (the current definition of hate speech) by users, and examines how tech violence always results in real-life violence and harm. It is funny how we’ve known all these years that tech platforms are destroying society by encouraging violence and murder, but allowed them to stay in business.

As early as 2018, the Report of the Special Rapporteur on violence against women, UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018), states that “Information and communications technology is used directly as a tool for making digital threats and inciting gender-based violence, including threats of physical and sexual violence, rape, killing, unwanted and harassing online communications or even the encouragement of others to harm women physically. It may also involve the dissemination of reputation-harming lies, electronic sabotage in the form of spam and malignant viruses, impersonation of the victim online and the sending of abusive emails or spam, blog posts, tweets or other online communications in the victim’s name. Technology-facilitated violence may also be committed in the workplace or in the form of so-called honour-based violence by intimate partners […]

It is therefore important to acknowledge that the Internet is being used in a broader environment of widespread and systemic structural discrimination and gender-based violence against women and girls, which frame their access to and use of the Internet and other information and communications technology. Emerging forms of ICT have facilitated new types of gender-based violence and gender inequality in access to technologies, which hinder women’s and girls’ full enjoyment of their human rights and their ability to achieve gender equality. […]

The consequences of harm caused by different manifestations of online violence are specifically gendered, given that women and girls suffer from particular stigma in the context of cultural inequality, discrimination, and patriarchy. Women subjected to online violence are often further victimized through harmful and negative gender stereotypes, which are prohibited by international law.”

If intentionally sexualizing individuals or a group of people in order to deprive them of the basic enjoyment of their human rights is not hate speech, good luck convincing me otherwise.

Tech-facilitated gender-based violence (TFGBV) is further defined as being rooted in, arising from, and exacerbated by misogyny, sexist norms, and rape culture, all of which existed long before the internet. However, TFGBV in turn accelerates, amplifies, aggravates, and perpetuates the enactment of and harm from these same values, norms, and institutions, in a vicious circle of technosocial oppression. (Source: Jessica West)

Deplatforming Misogyny gives several examples of hate speech:

  • Online Abuse: verbally or emotionally abusing someone online, such as insulting and harassing them, their work, or their personality traits and capabilities, including telling that person she should commit suicide or deserves to be sexually assaulted
  • Online Harassment: persistently engaging with someone online in a way that is unwanted, often but not necessarily with the intention to cause distress or inconvenience to that person. It is perpetrated by one person or by several organized persons, as in gang stalking (source: Suzie Dunn)
  • Slut-shaming (100% hate speech) can be perpetrated across several platforms and may include references to the targeted person’s sexuality, sexualized insults, or shaming the person for their sexuality or for engaging in sexual activity. This type of hate speech has the objective of creating an intimidating, hostile, degrading, humiliating or offensive environment (UNHRC, 38th Sess, UN Doc A/HRC/38/47 (2018))
    • Discussing someone else’s sexuality is kind of always a red flag, and criminal defense lawyers (among many other professionals) are totally engaging in hate speech with total impunity, just saying. Something needs to change, or the legal industry should be completely excluded from enforcing a clean internet. They should have zero immunity for perpetrating hate speech and thereby encouraging violence against women and children.
  • Non-consensual distribution of intimate images: (see Aylo’s business model) circulating intimate or sexual images or recordings of someone without their consent, such as where a person is nude, partially clothed, or engaged in sexual activity, often with the purpose of shaming, stigmatizing or harming the victim (also known as image-based abuse and image-based sexual exploitation). The UN warns against using the term “revenge porn” because it implies that the victim did something wrong deserving of revenge.
  • Sextortion: attempting to sexually extort another person by capturing sexual or intimate images or recordings of them and threatening to distribute them without consent unless the targeted person pays the perpetrator, follows their orders, or engages in sexual activity with or for them.
  • Voyeurism: criminal offense involving surreptitiously observing or recording someone while they are in a situation that gives rise to a reasonable expectation of privacy.
  • Doxing: publicly disclosing someone’s personal information online, such as their full name, home address, and social insurance number. Doxing is particularly concerning for individuals who are in or escaping situations of intimate partner violence, or who use pseudonyms due to living in repressive regimes or to avoid harmful discrimination for aspects of their identity, such as being transgender or a sex worker. (see The Guardian: “Facebook’s real name policy hurts people”)
  • Impersonation: taking over a person’s social media accounts, or creating false social media accounts purporting to be the victim, usually to solicit sex or make compromising statements.
  • Identity and Image Manipulation, i.e. deepfake videos: use of AI to produce videos of an individual saying something they did not say or doing something they did not do. In reality, video deepfakes are kind of fringe. The current AI applications are mainly focused on sexualizing and undressing women through unauthorized use of Instagram photos.
  • Online mobbing, or swarming: large numbers of people engaging in online harassment or online abuse against a single individual (Amber Heard comes to mind)
    • The Depp and Heard trial is an example of court-enabled hate speech. The way Heard was cross-examined on television falls within the definition of incitement of violence against victims of intimate partner violence. This trial harmed the reputation of the profession beyond repair and resulted in uncontrollable online mobbing.
  • Coordinated flagging and brigading are cited in the report, but I am not at all convinced that they are user-perpetrated. I believe that algorithmic conduct is 100% on the platforms. Users have zero control and liability in that regard. Nice try, but nope. If a survivor is taken down, I won’t let platforms get away with “users did it”. No way. Saying otherwise is pro-corporate propaganda.
  • Technology-aggravated sexual assault: group assault that is filmed and posted online. Here is where the Online Harms Bill can be used to sentence perps to life in prison, something that can’t be achieved under the Criminal Code.
  • Luring for sexual exploitation: i.e. grooming through social media, or through fake online ads, in order to lure underage victims into offline forms of sexual exploitation, such as sex trafficking and child sexual abuse. Here is another instance of hate speech deserving of a life sentence.

To be continued in another post: it is a long report (or, to be more precise, a bundle of legal and UN reports) and the bill is also a handful. I am only skimming the surface of the most prevalent forms of hate speech, which invariably equate to incitement of gender-based and intersectional genocide (see the report on missing and murdered Indigenous women and how it amounts to genocide). Just to say I can work with that bill. Bring it!


Law school messed with my head too much by convincing me that I cared about human rights for violent criminals and procedural safeguards for perp corps. I never did. It feels good to be my dystopian self again.