Meta, the recently adopted company name of what was once known as Facebook, has unveiled its “Next-Gen AI Supercomputer,” which will censor broadly defined “harmful content” like never before in anticipation of the metaverse.


In a January 24th press release, Meta formally introduced what it calls its “AI Research SuperCluster,” which the company proclaimed “is among the fastest AI supercomputers running today” and will become the fastest in existence once completed this summer.

Meta’s Research SuperCluster (RSC) will assist the company’s AI researchers in establishing more robust AI models that will be able to “learn from trillions of examples,” spanning hundreds of languages across text, images, and video.

When explaining why Meta needs such a powerful AI supercomputer, the “critical” use case highlighted in the press release was identifying “harmful content.”

“To fully realize the benefits of advanced AI, various domains, whether vision, speech, language, will require training increasingly large and complex models, especially for critical use cases like identifying harmful content. In early 2020, we decided the best way to accelerate progress was to design a new computing infrastructure — RSC.”

Back in early December, Meta released a blog post explaining how “Harmful content can evolve quickly,” and that enhanced AI can learn the nuances of what Meta would consider said harmful content that its traditional AI might overlook.


Apparently, Meta is determined to “keep people safe” on its platforms, and especially the forthcoming metaverse, boasting that this new supercomputer will be able to determine whether anything done on its platforms is “harmful or benign.”


“With RSC, we can more quickly train models that use multimodal signals to determine whether an action, sound or image is harmful or benign. This research will not only help keep people safe on our services today, but also in the future, as we build for the metaverse. As RSC moves into its next phase, we plan for it to grow bigger and more powerful, as we begin laying the groundwork for the metaverse.”

In reality, this means that the censorship on platforms like Facebook is about to get a lot more heavy-handed.
