New research has found that 20% of Instagram posts about luxury fashion brands featured counterfeit and/or illicit products. The report calls for a comprehensive international strategy to fight the prevalence of fake goods online, but the usual roadblocks remain, meaning that the status quo will not change any time soon.
The study, Social Media and Luxury Goods Counterfeit: a growing concern for government, industry and consumers worldwide (led by researcher Andrea Stroppa and available via The Washington Post’s website), examined 750,000 Instagram posts focused on top fashion brands and found that one-fifth featured counterfeit and/or illicit products. Among the posts scrutinised, an analysis of hashtag use showed Chanel to be the brand most targeted by infringers, followed by Prada and Louis Vuitton.
The top ten brand hashtags used by infringers were:
- Chanel (13.90%)
- Prada (9.69%)
- Louis Vuitton (8.51%)
- Fendi (6.41%)
- Gucci (6.13%)
- Dior (5.96%)
- Celine (5.59%)
- Hermes (5.51%)
- Rayban / Oakley (4.90%)
- Bvlgari (4.49%)
Beyond the headline figure of 150,000 posts featuring counterfeit or illicit products, the research drilled down to look at how these posts are generated, with bots, algorithms and AI software being deployed to promote counterfeits to users. In the online space, such activity creates a clear challenge for counsel charged with spearheading takedown efforts, as the use of such technology allows infringers to publish simultaneously from multiple accounts, blitz social media with different (and ever-changing) hashtags, mask their identity and, crucially, run their operations 24/7. Positively for brand owners, the research suggests that the bots encountered in the Instagram research featured limited capabilities and most were unable to interact properly with users. However, as the report notes, “their general level of automation seems good enough for their main function: promoting an illegal business and encouraging to buy fake items”.
So how can counsel detect illicit listings? Usefully, the researchers identified the keywords most commonly shared by Instagram accounts selling counterfeit items. The most common was ‘original’ (featured in 36.4% of such posts), with ‘stock’ (17.94%) in second place. Other keywords such as ‘replica’, ‘AAA’ and ‘1:1’ (used to describe a fake item that closely resembles the original) together accounted for about 18% of total terms. Using these keywords, the research identified 20,892 fake accounts selling counterfeit goods, which were collectively responsible for 14.5 million posts. By incorporating the same keywords into their own policing activities, counsel should similarly be able to identify posts of concern.
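The keyword-screening approach described above can be sketched in a few lines of code. This is a minimal illustration only: the keyword list comes from the study's findings, but the post data, field names and the `flag_post` helper are assumptions made for the example, not part of the research or any real monitoring tool.

```python
# Keywords the study found most often in counterfeit-selling posts
COUNTERFEIT_KEYWORDS = {"original", "stock", "replica", "aaa", "1:1"}

def flag_post(caption: str, hashtags: list[str]) -> bool:
    """Return True if a post's caption or hashtags contain a suspect keyword."""
    words = caption.lower().split()
    if any(kw in words for kw in COUNTERFEIT_KEYWORDS):
        return True
    return any(tag.lower().lstrip("#") in COUNTERFEIT_KEYWORDS for tag in hashtags)

# Hypothetical posts: one offering "1:1" copies, one innocent
posts = [
    {"caption": "Top quality 1:1 bags, DM for prices", "hashtags": ["#chanel"]},
    {"caption": "My new handbag!", "hashtags": ["#fashion"]},
]
suspect = [p for p in posts if flag_post(p["caption"], p["hashtags"])]
print(len(suspect))  # 1
```

In practice any such filter would need to be paired with human review, since words like ‘original’ and ‘stock’ also appear in entirely legitimate posts.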
However, tracking such keywords and branded hashtags will only work to a certain extent. With Instagram’s algorithms searching for suspicious accounts using hashtags and keywords, a number of posters are now moving away from the traditional approach of combining images, product descriptions and hashtags, instead publishing images with contact details, text or a QR code embedded in them. In many instances, images featured hashtags that did not match their content, and in several cases they included hashtags of brands that had nothing to do with the items depicted.
Complicating enforcement strategies further, many posters are shunning dedicated websites and instead conducting their sales efforts through instant messaging tools and apps (thereby avoiding the risk that brand owners will seek to take down their ecommerce sites). The study revealed that over 75% of seller accounts listed two contact methods on their posts; over half (52.67%) used WhatsApp, with WeChat in second spot (12.04% of posters).
Historically, Instagram has been lower down on the list of policing priorities when trademark counsel have considered their online enforcement strategies. That is changing and this latest research provides further reason to ensure that the platform is integrated into their efforts. However, it also highlights the complicated policing challenge that counsel must contend with. For its part, the report notes that Instagram has engaged in proactive efforts to purge bots and fake accounts from the site. Even so, counterfeiters are adept at continually evolving their tactics and it is clear that, faced with these adaptive strategies, trademark counsel face an uphill battle to make significant inroads into the prevalence of fake goods on such sites. It was ever thus, but could more be done by online actors, enforcement agencies and other stakeholders?
Considering the wider environment, on the World Economic Forum site Stroppa points to “a certain unwillingness or inability to make a coordinated and global effort to enforce effective security measures” across online platforms, but says that the development of more robust detection technologies is critical. The report concludes that other solutions could include social media platforms and instant messaging apps developing new technical filters and deploying further resources, open information-sharing among producers, authorities, hi-tech companies, consumer associations and other pertinent organisations, and public campaigns to promote broader awareness amongst users. In short, “this new surge of online counterfeit activity requires a comprehensive strategy and a cross-sector collaboration. Sooner rather than later”.
Such calls have been made before, and brand owners will clearly be supportive of any efforts to tackle the sale of counterfeit goods online. However, while this new research paper adds to the body of evidence on the scale of the problem, translating this into co-ordinated and meaningful action amongst the various stakeholders will remain a challenge. The status quo will certainly not change any time soon, leaving trademark counsel with an increasingly difficult game of whack-a-mole to contend with.
By Trevor Little, courtesy of World Trademark Review