GME3: Scam-blers Anonymous, School of Hard Hacks & Claude and Effect

From fake casino ads flooding social media to student data breaches and AI copyright battles, this week’s headlines highlight the growing risks at the intersection of tech, trust, and regulation. Fraudsters are hijacking Canadian casino brands to run scam ads on Meta platforms, a major school software provider just paid a ransom after exposing sensitive information, and authors are taking AI company Anthropic to court for allegedly scraping millions of pirated books. Read the full stories here!

Gambling

Scam-blers Anonymous

Fake online casino ads are flooding Canadian social media feeds, and the scams are getting bolder. A new article from Canadian Gaming Business breaks down the scope of the problem, with insights from regulators, operators, and industry leaders.

Fraudsters are continuing to buy ad space – mostly on Facebook and Instagram – and dressing it up with hijacked logos, names, and even doctored news reports from trusted properties like Casino Rama, River Cree, and Casino Regina. These ads lure users to offshore gambling sites and trick them into handing over personal or financial information.

Paul Burns of the Canadian Gaming Association says “virtually every land-based casino brand in the country” has been targeted. Great Canadian Entertainment, which operates over 20 casinos, now maintains a running list of fraudulent ads using its brands. VP Chuck Keeling calls it “a game of Whac-A-Mole,” as new scams pop up as fast as others are taken down.

Despite efforts from regulators in B.C., Saskatchewan, and Ontario – and some recent policy changes by Meta – the problem persists. Many Canadians don’t know how to spot a legitimate ad, and enforcement alone isn’t enough to stop the spread. As a result, consumers risk financial harm, while licensed operators face reputational fallout.

Media

School of Hard Hacks

Canada’s federal privacy commissioner has ended his investigation into the PowerSchool data breach, citing satisfaction with the company’s response and its commitment to bolstering cybersecurity. The U.S.-based firm, which provides student information systems used across several Canadian provinces and territories, experienced a cyberattack in December 2024 that exposed sensitive data – including names, contact info, birth dates, and in some cases medical records and Social Insurance Numbers – of students, educators, and parents.

PowerSchool took steps to contain the breach, notified affected parties, offered credit protection, and committed to further actions such as improved monitoring and detection tools. In response, Privacy Commissioner Philippe Dufresne discontinued the federal investigation, though his office will continue to monitor the company’s implementation of promised measures.

However, separate investigations by provincial privacy watchdogs in Ontario and Alberta are still ongoing. In May, the Toronto District School Board revealed that the stolen data had not been destroyed and that a ransom demand had been made. PowerSchool confirmed it paid the ransom, saying it believed doing so was in the best interest of the affected communities.

The company has pledged to confirm additional forensic and security steps by the end of July, provide proof of enhanced monitoring tools by year-end, and submit an independent security assessment by March 2026.

Entertainment

Claude and Effect

A U.S. federal judge has allowed a major copyright lawsuit against AI startup Anthropic to proceed as a nationwide class action. On Thursday, Judge William Alsup ruled that three authors – Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson – can represent all U.S. writers whose books were allegedly scraped from pirate sites LibGen and PiLiMi to train Anthropic’s AI systems.

The authors claim Anthropic, backed by Amazon and Alphabet, downloaded up to 7 million books between 2021 and 2022 without permission or compensation, storing them in what Alsup described as a “central library of all the books in the world.” While the judge previously found that training AI might qualify as fair use, he emphasized that the act of downloading and retaining pirated books itself may still constitute copyright infringement, exposing the company to potential damages in the billions if the case succeeds.

Anthropic pushed back, saying the court had underestimated the complexity of verifying copyright ownership for millions of works. The judge rejected that argument, finding the class action route appropriate given the scale of the alleged infringement.

This case adds to a growing wave of lawsuits targeting AI companies like OpenAI, Microsoft, and Meta over their use of copyrighted materials. The outcome could shape how generative AI developers handle content rights going forward.

GME Law is Jack Tadman, Lindsay Anderson, and Will Sarwer-Foner Androsoff. Jack’s practice has focused exclusively on gaming law since he was an articling student in 2010, acting for the usual players in the gaming and quasi-gaming space. Lindsay brings her experience as a negotiator and contracts attorney, specializing in commercial technology, SaaS services, and data privacy. 

At our firm, we are enthusiastic about helping players in the gaming space, including sports leagues, media companies, advertisers, and more. Our specialized knowledge of these industries allows us to provide tailored solutions to our clients’ unique legal needs. Reach out to us HERE or contact Jack directly at jack@gmelawyers.com if you want to learn more!

Check out some of our previous editions of the GME3 HERE and HERE, and be sure to follow us on LinkedIn to be notified of new posts, keep up to date with industry news, and more!
