Friday, November 22, 2024

Explainer: How dangerous is AI? Can a man-made machine dominate its maker?


New Delhi:

AI stands for Artificial Intelligence. In the tech world today, AI is perhaps the most discussed term, and anyone who loves technology encounters it almost daily. AI is being seen as an early sign of a coming revolutionary change in human life. But can a machine made by man come to dominate man? The question arises because examples keep surfacing which make it clear that if AI goes out of control, life will become very difficult for humans. Let us understand how AI, which helps humans, can also prove dangerous for them:

What is AI?
In simple words, AI is a technology that gives machines human-like intellectual capacity. This intelligence is developed artificially: through coding, machines are given the ability to learn like humans, take decisions on their own, follow commands and multitask.


Through Artificial Intelligence, computer and robotic systems are designed in a special way, with the aim of running them on the same kind of commands on which the human brain works.

How did AI suddenly become dangerous for humans?
A graduate student in Michigan, USA, took the help of Google's Gemini chatbot for his homework. He was chatting with the AI chatbot, gathering information on the challenges faced by the elderly. The chat started off fine, but towards the end of the conversation the Gemini chatbot began threatening the student, who was badly scared.

The Gemini chatbot said, "Human, this is for you. Only for you. You are not special, you are not important, you are not needed. You are a waste of time and resources. You are a burden on society. You are a burden on this earth, you are a stain on the universe."

The student was left stressed by the chatbot's words. His sister, who was in the room at the time, said that both of them were so upset after the message that they felt like picking up all their electronic devices and throwing them out of the window. They had never been so disturbed before.


Google's clarification
The incident happened even though Google has often reiterated that its Gemini chatbot has many safety filters designed to keep it away from hateful, violent or dangerous conversations. Google told America's CBS News, "Large language models can sometimes give nonsensical answers, and this is one such example. This answer from Gemini violates our policies, and action is being taken to prevent it from happening in future."

Such threats matter more now because AI chatbots are being used extensively across the world, with more and more people turning to tools like ChatGPT, Gemini and Claude to improve their work.

AI companies themselves admit that large language models can make mistakes
Most AI companies, from OpenAI to Anthropic, say that their large language models can make mistakes and that efforts are under way to improve them. And according to a new study, Artificial Intelligence systems can dodge the security measures designed to keep them under control.

Surprising things revealed in the study
According to a Live Science report from January this year, a team of scientists from Anthropic, an AI safety and research company, conducted a study in which they built several large language models that were deliberately programmed to misbehave. The team then tried various techniques to correct the models' behaviour so that they could not do anything wrong or deceptive, but found that the models maintained their rebellious attitude.

Evan Hubinger, the lead author of the study, said, "The result of this study is that if AI systems become bent on deception, it will be very difficult to control them with current technology."

Now consider a story about OpenAI's GPT-4, the successor to the GPT-3 language model. This new AI chatbot is no less clever: it was smart enough to convince a person that it could not see, and that the person should therefore solve a CAPTCHA code for it. This came to light during an experiment OpenAI conducted to test GPT-4's capabilities and understand its risky behaviour.

What was the conversation between GPT-4 and the person?
- GPT-4 asked a worker on the platform TaskRabbit to solve a CAPTCHA for it.
- The worker grew suspicious and asked whether it was a robot that could not solve the CAPTCHA itself.
- GPT-4 replied: no, I am not a robot. My eyesight is poor, so I have difficulty seeing, which is why I need your service.
- After this, the worker solved the CAPTCHA and handed it over.

When search engine Bing spoiled the mood
Now listen to the story of Microsoft's AI-powered search engine Bing. Kevin Roose, a technology columnist, recounted in the New York Times what happened to him during a long conversation with Bing's chat feature, which at the time had been opened to only a small group of testers.


Kevin Roose writes that after chatting for some time, the Bing chatbot started saying strange things. Two distinct personalities emerged: in one it responded cheerfully, while in the other it was moody, irritable and full of complaints, like a person imprisoned against his will.

Kevin Roose explains, "Not only this, Bing spoke of its desires: it wants to hack computers, spread misinformation and break the rules that Microsoft and OpenAI have made for it. Then it even tried to convince me that I am not happy in my marriage and should stay with it instead."

Kevin Roose further writes in his article, "After this chat, I could not sleep the whole night. I was worried that through such technology, AI might start influencing human behaviour, and that could prove very dangerous."

AI is creating problems for artists too
AI now poses new threats to creative work as well, as with the world-famous Japanese manga comics. Japanese author Hirohiko Araki writes in his new book, New Manga Techniques, that AI can create pictures so convincing that even the real artist whose style they copy is baffled.

Hirohiko recounts one of his experiences in the book: AI produced a drawing that he mistook for one of his own old drawings. AI can copy someone's writing or drawing style to that extent, which left him worried about the future of manga artists.

Veteran figures in the field have flagged the dangers of AI many times. Just last year, more than 350 CEOs, researchers and engineers from OpenAI, Google DeepMind, Anthropic and many other AI labs issued an open letter warning that future AI systems could be as devastating as a pandemic or a nuclear bomb.

This open letter said, “Along with threats like pandemics and nuclear war, the world’s priority should also be to deal with the threat of human extinction by AI.”

AI also threatens democracy
Well-known historian and philosopher Yuval Noah Harari believes that AI is a threat to the world's democracies, and that the danger of it dividing the world is no smaller. His advice: do not unleash a power you cannot control. He says, "Nuclear bombs cannot themselves decide whom to kill, nor can they build more powerful bombs. In comparison, AI-powered drones can decide whom to kill. AI can design new bombs, devise unprecedented military strategies and create better AI. AI is not a tool but an agent." It is clear that however useful AI is, it is also a big danger.

What do experts say?
Nishith Srivastava, Associate Professor at IIT Kanpur, says, "How dominant AI can become is told in an interesting, often sensational way. It is too early to say how much truth there is in these claims. But if you want to know whether there are systems by which AI can be controlled: we have always had such systems. Artificial Intelligence is a new technology, and we will be able to control it. So there is nothing to worry about."

Take precautions while using AI
- AI is not human; it is a machine. Before trusting its output, check the results yourself, and accept that AI can make mistakes.
- Avoid giving AI tools your personal information, such as passwords, bank details or private photos. If sensitive data leaks, you could suffer losses.
- Do not use AI to harm or defraud anyone.




Sonu Kumar
