Social media platforms such as Facebook, YouTube and Instagram give users real-time, autonomous control over content creation and distribution. In light of the recent live streaming of the Christchurch shooting, questions are being raised about the responsibilities these social media platforms bear.
Remember Uncle Ben saying this to Peter Parker in Spider-Man:
With great power comes great responsibility
If you were the product owner/CEO of such a platform, how would you guarantee social responsibility?
Let me give it a shot!
- First, I would enforce two-stage authentication for all new accounts. When a user creates a new account, he/she must provide both an email address and a mobile number; neither is optional (currently, when you create a Gmail account, providing a mobile number is optional). A confirmation link is sent to the email address, and a security code is sent to the mobile number. The security code must be entered on the email confirmation page before the new account can be created.
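That two-stage flow could be sketched roughly as follows. This is a toy illustration only: the class and method names are mine, the token and code are generated locally rather than actually emailed or texted, and a real platform would add expiry, rate limiting and retry handling.

```python
import secrets

class SignupFlow:
    """Hypothetical sketch of a mandatory two-stage account signup."""

    def __init__(self, email: str, mobile: str):
        self.email = email
        self.mobile = mobile
        # Token "sent" to the email address (here just generated locally).
        self.email_token = secrets.token_urlsafe(16)
        # Six-digit security code "sent" to the mobile number by SMS.
        self.sms_code = f"{secrets.randbelow(1_000_000):06d}"
        self.activated = False

    def confirm(self, email_token: str, sms_code: str) -> bool:
        # The SMS code is entered on the email-confirmation page:
        # both factors must match before the account is created.
        if email_token == self.email_token and sms_code == self.sms_code:
            self.activated = True
        return self.activated
```

The key design point is that the account stays inactive until both factors (the email link and the SMS code) have been verified together.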
- Second, limit document uploads to Word or rich-text formats so that, while the document is being uploaded, a special script can read the content and block publication if offensive content is discovered.
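A minimal sketch of that upload-time text check might look like this. The blocklist terms are placeholders of my own choosing; a real system would use a trained classifier and far more nuanced rules, not simple keyword matching.

```python
# Illustrative blocklist only; a production system would use a trained model.
BLOCKLIST = {"attack", "massacre"}

def allow_publish(document_text: str) -> bool:
    """Return False if the uploaded document contains blocklisted content."""
    words = {w.strip(".,!?").lower() for w in document_text.split()}
    return words.isdisjoint(BLOCKLIST)
```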
- Third, assess all image uploads during the upload itself so that images with offensive content are stopped from being published.
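As a sketch, the image gate could score each upload and block anything above a threshold. Here `classify_image` is a stand-in for a real moderation model (e.g. a neural network returning an "offensive" probability); I have stubbed it so the example runs, and the threshold value is arbitrary.

```python
THRESHOLD = 0.8  # illustrative cut-off

def classify_image(image_bytes: bytes) -> float:
    # Placeholder: a real moderation model would analyse the pixels.
    return 0.9 if b"offensive" in image_bytes else 0.1

def gate_image_upload(image_bytes: bytes) -> bool:
    """Allow publishing only if the offensiveness score is below threshold."""
    return classify_image(image_bytes) < THRESHOLD
```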
- Fourth, build a program that uses AI and facial recognition to detect illegal content in videos being uploaded and, where necessary, stop publication. In recent times, we have come across fake videos created with AI tools. One such video is shown below. [courtesy: https://www.news.com.au/technology/online/security/how-disturbing-ai-technology-could-be-used-to-scam-online-daters/news-story/1be46dc7081613849d67b82566f8b421 ]
If you watch the video closely, you will notice that the voice has been mimicked very well, even though you can still tell it is not the real one. There is also a slight lag (a very small one) between the voice and the facial movement. But as these AI tools become more sophisticated, more and more such fake videos will appear, and it will become harder to distinguish the real ones from the fakes.
- Finally, the main problem will be live-streamed video. Unlike an uploaded video, which a special program can assess frame by frame for offensive content, a live feed has to be detected and stopped instantly. How this can be done, I don't know, but it will take a cutting-edge AI tool to stop a live stream carrying illegal content in real time.
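One conceivable shape for such a tool is to sample frames as they arrive, score each one, and cut the feed the moment a frame crosses a risk threshold. The sketch below is purely illustrative: `score_frame` is a stub standing in for a real classifier, frames are plain dictionaries, and the threshold is arbitrary. A production system would run a real model on dedicated hardware to keep the per-frame latency low enough for live use.

```python
THRESHOLD = 0.8  # illustrative risk cut-off

def score_frame(frame: dict) -> float:
    # Placeholder classifier; a real system would analyse the frame pixels.
    return frame.get("risk", 0.0)

def moderate_stream(frames):
    """Pass frames through until one scores above THRESHOLD, then cut the feed."""
    for frame in frames:
        if score_frame(frame) >= THRESHOLD:
            break  # stop the live stream immediately
        yield frame
```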
- Photos and videos can also be uploaded via other social media tools such as Viber, WhatsApp etc. The same principles will need to apply to these tools as well, and a world body/organization will be needed to set rules and standards for all social media providers to follow.
There is now a shared and common realization that social media platforms bear responsibility for how content is shared across millions of people. It is time we wake up to that shared understanding.
[Feature Image: courtesy of https://medium.com/@stevanmcgrath39/how-to-market-using-live-streaming-on-social-media-platforms-9da9b784c3bd]