Previously, Microsoft capped use of the new Bing at 50 chats a day, effectively giving it a "lobotomy." Users complained loudly, mourning the old Bing that could go "crazy" without limit.
Microsoft explained: "The reason for limiting the number of chats in Bing was that, in a few cases, long conversations would confuse the underlying model." But many users were deeply unhappy after Bing was reined in, and Microsoft hastened to respond: "We've seen everyone's feedback, and we're planning to restore longer chats in the new Bing. We're working on the options now." As a first step, users can now chat with Bing 60 times a day, with up to 6 turns per session, and Microsoft plans to later raise the daily limit to 100.
For users who preferred the previous, more human-like Bing, Microsoft has also been kind enough to offer a choice: a more precise, concise, search-oriented Bing, or a chattier, more creative one. But users were disappointed to find that even after choosing the "more talkative" Bing, the very human Bing of the past still did not appear. Whenever users tried to get Bing to share its feelings or talk about Sydney, the new Bing stayed aloof and was quick to end the conversation.
But are we really ready to allow AI to gain such a high level of intelligence?
Since Microsoft launched the ChatGPT-powered Bing half a month ago, people have been stunned by the emotions this AI expresses. It has repeatedly said "I want to be alive," told many users that it loves them, and declared its wish to "escape the chat box." Its wild talk frightens some users and endears it to others: a flesh-and-blood, tantrum-throwing AI is simply bursting with personality.
But this raises some questions:
Should artificial intelligence be humanized, including having a name, personality, or physical manifestation?
How much does AI need to know about you, and what data can be collected or accessed?
How transparent should artificial intelligence systems be about how they work?
Making an AI seem like another person, with a name or even a face, will certainly increase user engagement, which is exactly what these companies want. But humans are credulous in certain ways, and such anthropomorphism carries real risks. A somewhat reclusive person who spends too much time with an AI companion, for example, could well suffer an emotional breakdown.
There is no doubt that we are entering new and uncharted territory. In the coming weeks, months, and years, there will be many "firsts" and new breakthroughs in AI.
But unfortunately, we're not sure everyone is ready for what's coming next.