Large language models have a well-known tendency to make things up when asked for help. That tendency has not spared the "official" chatbot run by New York City's government, which gives false answers to important questions about local law and municipal policy.
New York's "MyCity" chatbot launched as a pilot program in October of last year. The announcement touted it as a way for business owners to save time and money by getting instant, useful, and reliable information drawn from more than 2,000 NYC Business web pages and articles on topics such as code and rule compliance, business incentives, and avoiding violations and fines.
It has since emerged, however, that the MyCity chatbot gives dangerously incorrect information about some basic city rules. The bot delivers blatantly wrong answers about housing rents, wage payments, and working-hour rules, as well as about specific sectors of the city's economy, such as funeral-service pricing, and other aspects of daily life. At the same time, the chatbot sometimes gave correct answers to the very same questions.
MyCity is still in beta and warns users that it "may sometimes produce incorrect, harmful, or biased content" and that they "should not rely on its answers as a substitute for professional advice." Yet the same page states that the bot is "trained to provide official information about New York businesses" and is a way to "help business owners navigate government."
Leslie Brown, a spokesperson for New York City's Office of Technology and Innovation, says the bot "has already provided timely and accurate answers to thousands of people" and that "we will continue to focus on updating this tool so we can better support small businesses."
A recent Washington Post article reported that chatbots built into mainstream tax-preparation software give random, false, or inaccurate answers to many tax questions. Problems like these are pushing some companies away from general-purpose chatbots and toward more narrowly trained models configured with only a small set of relevant information. The US Federal Trade Commission (FTC), meanwhile, is seeking the authority to hold chatbot operators accountable for false or negligent information.
Source: Ars Technica