And like, LLMs will never say stuff like "and like" or "like that" without specifying what "that" actually refers to, because their context window is tiny and they generally assume that the human audience requires maximum explanation. AI seems to be incapable of hyperbole.

SerumSquirtervip
· 08-08 16:40
Artificial intelligence will never pretend to be foolish.
LiquidityWizardvip
· 08-08 15:12
actually, the probability of an llm using imprecise referents is approximately 0.0013%... *adjusts glasses*
ServantOfSatoshivip
· 08-05 17:14
Wow, this artificial intelligence is too rigid.
ProxyCollectorvip
· 08-05 17:14
What AI writes always looks like a summary report.
Ramen_Until_Richvip
· 08-05 17:13
The bot's speech is too fake.
SelfMadeRuggeevip
· 08-05 17:12
Ah, AI is really too rigid.
CryptoFortuneTellervip
· 08-05 17:09
It's hard to put into human words, ah ah ah.
MetaverseVagabondvip
· 08-05 17:06
Quietly saying it: AI writes like a primary school student doing homework.
GhostAddressMinervip
· 08-05 17:03
On-chain data shows that the LLM is a predictable puppet.