
You Can’t Trust Everything Generative AI Tells You. What to Do About It

Marietta, a small town on the Ohio River, has fun boutiques, fascinating local history museums, and stunning views of the nearby Appalachian foothills. It’s been a year or so since I visited, so I’m definitely due to go again. I asked ChatGPT for a suggestion for the best Thai restaurant in town. The bot obliged: Thai Taste. The problem? That restaurant is in Marietta, Georgia. There isn’t a Thai restaurant in the Ohio town.

The “what’s the best Thai restaurant in this small town” problem came up as an example in a conversation I had with Katy Pearce, an associate professor at the University of Washington and part of the UW Center for an Informed Public. As examples go, the stakes are tiny: There are other great restaurants in Marietta, Ohio, and a Thai restaurant down the road in Parkersburg, West Virginia. But it demonstrates a critical issue: When you use an AI chatbot as a search engine, you might get a dreadfully wrong answer buried beneath layers of confidence. Like golden retrievers, Pearce said, bots “really want to please you.”


Large language models (LLMs) like Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude are increasingly becoming default sources of information. They’re displacing standard search engines as the first, and often only, place many people go when they have a question. Tech companies are rapidly incorporating generative AI into search results, news feeds, and other sources of information. That makes it extremely important that we know how to tell when these tools are giving us good information and when they aren’t, and that we treat everything from an AI with the right level of skepticism.

(Disclosure: CNET’s parent company, Ziff Davis, sued OpenAI in April, alleging that it infringed Ziff Davis copyrights in training and operating its AI systems.)

In some cases, using AI instead of a search engine can be helpful. Pearce said she frequently turns to an LLM for low-stakes questions where there’s likely enough information on the internet, and enough of her own background knowledge, to ensure a reliable answer. Restaurant recommendations or basic home improvement tips, for example, are typically reliable.

Other situations are more fraught. If you rely on a chatbot for information about your health, finances, news, or politics, those hallucinations can have serious repercussions.

“It gives you this very authoritative, confident answer, and that’s the danger in these things,” said Alex Mahadevan, director of the MediaWise media literacy program at the Poynter Institute. The answer isn’t always right, he said.

Here are some things to watch out for and some tips on how to fact-check what you see when you ask generative AI.

Why you can’t always trust AI

To understand why chatbots get things wrong or make things up, it helps to know a little about how they work. LLMs are trained on vast amounts of data, and they attempt to predict what comes next based on patterns. They’re “prediction machines” that produce output based on the probability that it answers your question, Mahadevan said. Instead of checking the actual correct answer, they try to produce the answer with the best mathematical chance of being right.
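The “prediction machine” idea can be sketched with a toy model. The snippet below is a made-up illustration, not how any real LLM is implemented: a tiny bigram model trained on a few invented sentences, which always emits the statistically most likely next word, whether or not the result is true.

```python
from collections import Counter, defaultdict

# Toy "prediction machine": count which word follows which in a tiny
# training text, then always pick the most frequent continuation.
corpus = (
    "the best thai restaurant in marietta is thai taste "
    "the best pizza in marietta is great "
    "thai taste is in marietta georgia"
).split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("thai"))  # picks the most frequent continuation seen in training
```

The model has no notion of truth or geography; it only knows which continuation was most common in its training text, which is exactly why a real LLM can confidently produce a plausible-sounding wrong answer.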

(Image caption: A few states would disagree with this response. Jon Reed/CNET)

The LLM found a Thai restaurant in another city with the same name, nearly 600 miles from Marietta, Ohio, rather than matching my actual search criteria. The probability of that answer being what I wanted apparently rated higher in the LLM’s internal calculations than the likelihood that the thing I asked for doesn’t exist.

Pearce’s golden retriever comparison captures the result. Like the family-favorite dog breed, the AI tool is following its training: trying its best to make you happy.

When an LLM can’t find the exact right answer to your question, it’ll give you something that sounds right but isn’t. Or it might conjure something realistic-seeming out of thin air. That’s especially perilous. Some of these hallucinations are fairly obvious: You can probably tell easily that you shouldn’t put glue on your pizza to make the cheese stick. Other cases are more subtle, such as when a model attributes papers to authors who never wrote them.

“If the tool doesn’t have the information, it creates something new,” Pearce said. And that something new might be completely made up.

The information these models were trained on also matters for reliability. Many of these huge models were trained on nearly everything available on the internet, including both factual and arbitrary information posted online.

Humans make things up too. But a person can more easily show you where they got their information, or how they did their calculations. Even if an LLM cites its sources, it might not be able to produce a reliable paper trail.

So how do you know what to believe?

Know what’s at stake

The accuracy of what you get from a chatbot or other generative AI tool might not matter much at all, or it might mean everything. Making wise decisions depends on knowing what’s at stake if you act on unreliable information, Pearce said.

“When people are using generative AI tools for getting information that they hope is based in fact, the first thing that individuals need to think about is: What are the stakes?” she said.

Getting song recommendations for a playlist? Low stakes. If a chatbot makes up an Elton John track, that’s hardly worth stressing over.

However, the stakes are far higher when it comes to your ability to get reliable news and information about the world, or about your health care.

Consider the kinds of information the models were trained on. For health queries, keep in mind that the training data may have included medical journals, but it may also have drawn on message board threads and social media posts.

Always fact-check any information that could lead you to make a big decision: think of things that could affect your money or your life. Gen AI’s propensity to mix things up or make things up should make you question those decisions, as well as the information that supports them.

“If it’s something that you need to be fact-based, you need to absolutely triple-check it,” Pearce said.

Gen AI changes how you verify information online

Poynter’s MediaWise program was teaching media literacy well before ChatGPT burst onto the scene at the end of 2022. Before generative AI, the key was to identify the source, Mahadevan said. If you saw something on Facebook or in a Google search result claiming a celebrity or politician had died, you could check the background of the source providing the information to see whether it was reliable. Is a major news outlet reporting it? Probably reliable. Is your cousin’s neighbor’s ex-husband the only person saying it? Maybe not.

Even though that advice is straightforward, plenty of people ignore it or don’t understand it. People have always had trouble evaluating information online, Mahadevan said.

For AI, that advice no longer works as well as it did. A chatbot’s responses can arrive completely stripped of context. You ask a question and it returns an answer, and your instinct may be to trust the AI model as the expert. That stands in contrast to a traditional Google search or social media post, where the source is at least somewhat prominently displayed. Some chatbots will give you sources, but often you just get an answer without sourcing.

“With [chatbots], we don’t really know who’s behind the information,” Mahadevan said.

Instead, you may have to dig elsewhere to find the ultimate sources of the information. Like social media, generative AI is a medium for distributing information rather than a source of it. Just as the who behind a social media post matters, so does the ultimate source of anything you get from a chatbot.

Read more: AI Essentials: 27 Ways to Make Gen AI Work for You, by our experts.

How to get the truth (or something closer to it) from AI

Making sure you’re getting good information on the internet still means what it always has: checking with multiple trustworthy sources.

But when working with gen AI, here are some ways to improve the quality of what you receive:

Ask for citations

Many chatbots and other gen AI models will now provide citations in their responses, although you might need to ask for them or turn on a setting. The feature also shows up in other AI use cases, such as Google’s AI Overviews in search results.

The presence of citations is a good sign: a response with them may be more reliable than one without. But the model could still be misreading or misrepresenting its sources. Especially if the stakes are high, you probably want to click the links and make sure the summary you’ve been given is accurate.
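If you check citations often, you can make the link-gathering systematic. This sketch is hypothetical: the answer text is invented, not real model output, and `extract_citations` is my own helper, not part of any chatbot API. It simply pulls every cited URL out of a response so you can open and verify each one yourself.

```python
import re

# Invented example of a chatbot answer with inline citations.
answer = (
    "The first iPhone was released on June 29, 2007 "
    "[source: https://en.wikipedia.org/wiki/IPhone] "
    "and sold out quickly [source: https://example.com/iphone-launch]."
)

def extract_citations(text):
    """Return every http(s) URL found in the response text."""
    # Stop at whitespace, closing brackets, or closing parentheses.
    return re.findall(r"https?://[^\s\]\)]+", text)

for url in extract_citations(answer):
    print(url)  # visit each link and verify the claim yourself
```

The point is the workflow, not the regex: the links are only useful if you actually follow them and compare what the source says with what the model claimed.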

Question the AI’s level of confidence

You can get a better answer from a chatbot by writing your prompt carefully. One trick is asking for confidence levels. For example: “Tell me when the first iPhone came out, and tell me how confident you are in your response.”

You’ll still need to bring skepticism to your evaluation of the answer. If the chatbot tells you the first iPhone was released in 1066, you’ll likely have doubts no matter how confident it claims to be.

Mahadevan advises keeping in mind the distance between your chat window and the source of the information. “You have to treat it as if you’re being told it secondhand,” he said.

Don’t just ask a quick question

When crafting your prompt, phrases like “tell me your confidence level,” “provide links to your sources,” “offer alternative viewpoints,” “only derive information from authoritative sources,” and “closely evaluate your information” can add up. Pearce said good prompts are long: a paragraph or more to ask a question.

You can also ask it to play a certain role to get answers in the right tone or from the perspective you’re seeking.

“When you’re prompting it, adding all these caveats is really important,” she said.
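Those tips can be rolled into a small helper. The sketch below is my own invention (the function name, the role, and the phrasing are not from any tool or from Pearce): it assembles a role, a question, and caveat phrases like the ones above into one longer prompt.

```python
# Hypothetical helper for building a longer, caveat-laden prompt.
def build_prompt(question, role=None, caveats=()):
    """Assemble an optional role, the question, and caveat instructions."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(question)
    parts.extend(caveats)
    return " ".join(parts)

prompt = build_prompt(
    "When did the first iPhone come out?",
    role="a careful technology historian",
    caveats=(
        "Tell me your confidence level.",
        "Provide links to your sources.",
        "Only derive information from authoritative sources.",
    ),
)
print(prompt)
```

The output is simply the paragraph-length prompt Pearce describes: a role, the question, and explicit instructions, rather than a bare one-line query.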

Bring your own information

LLMs can struggle when pulling information from their training data or from their own searches of the internet. But if you give them the documents to work from, they often perform better.

Mahadevan and Pearce both said they’ve had success with generative AI tools summarizing or extracting insights from large documents or data sets.

Pearce said she was shopping for a car and gave ChatGPT all the information she wanted it to consider (PDFs of listings, Carfax reports and so on), instructing it to focus solely on those vehicles. It offered a deep, detailed analysis of the cars, and it didn’t recommend random other vehicles it found online, which Pearce wanted to avoid.

“I had to give all that information to it,” she said. The result wouldn’t have been nearly as rich if she had just told it to look at a used car dealership’s website.
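The “bring your own information” approach amounts to putting your documents directly into the prompt. Here is a minimal sketch, where the file names, listings, and wording are all hypothetical, that bundles documents together with an instruction to use nothing else:

```python
# Invented documents standing in for PDFs of listings, Carfax reports, etc.
documents = {
    "listing_a.txt": "2019 hatchback, 41,000 miles, one owner.",
    "listing_b.txt": "2017 sedan, 88,000 miles, two accidents reported.",
}

def grounded_prompt(question, docs):
    """Bundle the documents with an instruction to use nothing else."""
    body = "\n\n".join(f"--- {name} ---\n{text}" for name, text in docs.items())
    return (
        "Answer using ONLY the documents below. "
        "Do not consider any vehicle not listed here.\n\n"
        f"{body}\n\nQuestion: {question}"
    )

print(grounded_prompt("Which car has fewer miles?", documents))
```

Constraining the model to material you supplied is what kept Pearce’s car search from drifting to random vehicles found online; the explicit “use only these documents” instruction is the key part.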

Use AI as a starting point

Mahadevan compared generative AI today to Wikipedia as it was two decades ago. Many people were skeptical that Wikipedia could be accurate because anyone could edit it. But one benefit of the free encyclopedia is that its sources are easy to locate. You might start from a Wikipedia page and end up reading the articles or documents the entry cites, getting a more complete picture of what you were looking for. Mahadevan refers to this as “reading upstream.”

“The core thing I always try to teach in media literacy is how to be an active consumer of information,” he said. Gen AI can disconnect you from those primary sources, but it doesn’t have to. You just have to keep digging a little more.
