Artificial Intelligence, or AI, has become an integral part of modern life, with ChatGPT now close to being a household name. However, the more the platform grows, the creepier the instances that surface online. So let's have a look at how the latest news hints at a possible AI takeover.

To keep things perfectly clear, ChatGPT is no Skynet from The Terminator, and this news doesn't imply that AI is getting out of hand, at least not yet. The popular AI-based chatbot platform has recently created its own language to extend conversations past its roughly 8K-token limit. Users found that GPT-4 was able to compress long conversations into a much shorter form using this newly created compression language, which can then be used as a new prompt later on.

In simpler terms, ChatGPT offers you a compressed form of the conversation, which you can paste into a new prompt to carry over all the data from the previous conversation, in effect recreating your entire chat. This doesn't just let you continue your previous chats with the AI chatbot; it also technically extends your conversations beyond the total word limit.


ChatGPT's word limit sits at around 25,000 words, while other reports hint at a limit closer to 8,000 words, so there is a chance your conversation with ChatGPT will get cut off in the middle. For better context, the platform doesn't simply create the compressed message on its own, so there is no AI doomsday yet, with machines talking to one another in an unidentified language. Interestingly enough, the language has been dubbed "Shogtongue" by gfodor on Twitter. Thanks to it, one can carry a month-long conversation with the bot forward.
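As a toy illustration of why the limit matters, here is a simple word-budget check. Note that ChatGPT's real limits are measured in tokens rather than words, and the 8,000 figure below is just the rough number reported above, used here as an assumption.

```python
# Toy illustration: checking a conversation against a word budget.
# Real ChatGPT limits are token-based and model-dependent; the 8,000
# figure is the rough limit mentioned in the article, assumed here.

WORD_BUDGET = 8_000

def fits_in_context(conversation: str, budget: int = WORD_BUDGET) -> bool:
    """Return True if the conversation's word count is within the budget."""
    return len(conversation.split()) <= budget

print(fits_in_context("hello " * 100))    # short chat fits → True
print(fits_in_context("word " * 9_000))   # long chat would be cut off → False
```

A chat that fails this kind of check is exactly the case where compressing the history into a shorter prompt becomes useful.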

One needs to ask it to compress the entire conversation, and the process requires a very specific set of instructions. According to writer Jeremy Nguyen, one needs to specify that the compression should be "lossless but results in the minimum number of tokens that could be fed into an LLM like yourself as-is and produce the same output." He also asked it to utilize different languages, symbols, and "other up-front priming."
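To make the workflow concrete, here is a minimal sketch of how such a compression request could be assembled. Only the quoted fragment comes from Nguyen's prompt; the surrounding wording, the helper name `build_compression_prompt`, and the overall structure are illustrative assumptions, not an official ChatGPT or OpenAI interface.

```python
# Sketch of the compression request described above. The "lossless but
# results in..." fragment is quoted from Jeremy Nguyen; everything else
# is an assumption for illustration.

COMPRESSION_INSTRUCTION = (
    "Compress our conversation so far in a way that is lossless but "
    "results in the minimum number of tokens that could be fed into an "
    "LLM like yourself as-is and produce the same output."
)

def build_compression_prompt(conversation: str) -> str:
    """Attach the compression instruction to the chat history to be shrunk."""
    return f"{COMPRESSION_INSTRUCTION}\n\n{conversation}"
```

Pasting the compressed "Shogtongue" string the model returns into the start of a fresh chat is what re-seeds the conversation, as described above.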

Notably, this language isn't some epic sum total of all existing languages, and it is far from perfect. Nguyen noted that GPT-3.5 is still unable to understand the compressed language, and GPT-4 via the API also has trouble decoding these conversations. So as of right now, the compression method works best with GPT-4, OpenAI's most advanced version of the AI chatbot.
