New Study Finds DeepSeek-Chat AI Content Highly Detectable—Raising Questions About Its Origins
A new study conducted by Originality.ai has found that text generated by DeepSeek-Chat is detected with 99.3% accuracy by the company’s AI content detection models.
This suggests that DeepSeek-Chat’s output is highly distinguishable from human-written text, reinforcing the effectiveness of AI detection technology.
The study analyzed 150 DeepSeek-Chat-generated text samples, comparing their detectability across multiple AI content detection tools.
Key Findings:
- Originality.ai’s models achieved 99.3% accuracy in detecting DeepSeek-Chat content, outperforming competitors:
- GPTZero: 97.3% accuracy
- RapidAPI’s AI Content Detector: 80.7% accuracy
- DeepSeek-Chat may be a distilled version of one of OpenAI’s LLMs.
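For context, the reported detection rates are consistent with simple per-detector hit rates over the 150-sample set. The short Python sketch below illustrates that arithmetic; the per-detector hit counts are assumptions back-calculated from the published percentages, not figures released in the study.

```python
# Minimal sketch of the accuracy arithmetic behind the reported figures.
# The hit counts are assumptions back-calculated from the published
# percentages (e.g. 149/150 ≈ 99.3%), not data released by the study.
TOTAL_SAMPLES = 150

detections = {
    "Originality.ai": 149,                # 149/150 ≈ 99.3%
    "GPTZero": 146,                       # 146/150 ≈ 97.3%
    "RapidAPI AI Content Detector": 121,  # 121/150 ≈ 80.7%
}

for detector, hits in detections.items():
    print(f"{detector}: {hits / TOTAL_SAMPLES:.1%}")
```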
Originality.ai rigorously evaluates newly released LLMs against its AI detection models to measure efficacy. Typically, a new model initially lowers detection accuracy, and engineers then retrain the system to close the gap and restore peak detection performance.
However, DeepSeek-Chat did not cause this expected drop in accuracy. This unusual result has led researchers at Originality.ai to suspect that DeepSeek-Chat may be a distilled version of OpenAI’s ChatGPT or another existing LLM.
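To make the workflow above concrete, here is a hypothetical sketch of an evaluate-then-retrain loop of the kind described. All function names, thresholds, and the placeholder detector are illustrative assumptions, not Originality.ai’s actual pipeline.

```python
import random

BASELINE_ACCURACY = 0.993  # assumed peak accuracy on known models
DROP_THRESHOLD = 0.02      # assumed tolerable dip before retraining

def run_detector(text: str) -> bool:
    """Stub standing in for a real detector call; returns True if the
    text is flagged as AI-generated."""
    return random.random() < 0.99  # placeholder behaviour

def retrain_detector(samples: list[str]) -> None:
    """Stub standing in for retraining the detector on new-model output."""
    print(f"Retraining on {len(samples)} samples...")

def evaluate_new_model(samples: list[str]) -> float:
    """Fraction of AI-generated samples the detector flags correctly."""
    hits = sum(run_detector(text) for text in samples)
    return hits / len(samples)

samples = ["example AI-generated text"] * 150  # placeholder corpus
accuracy = evaluate_new_model(samples)
print(f"Detection accuracy on new model: {accuracy:.1%}")

if BASELINE_ACCURACY - accuracy > DROP_THRESHOLD:
    # Typical for a genuinely new model: accuracy dips, so the detector
    # is retrained to close the gap and restore peak performance.
    retrain_detector(samples)
else:
    # The DeepSeek-Chat result: no meaningful dip, consistent with output
    # resembling an already-covered model family.
    print("No retraining needed; output matches known model signatures.")
```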
Adding weight to this theory, Bloomberg and the BBC have reported that OpenAI and Microsoft are investigating whether OpenAI’s technology was used or obtained in an unauthorized manner in relation to DeepSeek.
Read the full study here:
Jonathan Gillham
Originality.ai
+1 705-888-8355
email us here