Artificial Intelligence

 




“The science and engineering of making intelligent machines, especially intelligent computer programs.” - John McCarthy



Artificial Intelligence is an approach to making a computer, a robot, or a product think the way intelligent humans think. AI studies how the human brain thinks, learns, decides, and works when it tries to solve problems, and uses those findings to build intelligent software systems. The aim of AI is to improve computer functions related to human knowledge, for example reasoning, learning, and problem-solving.

Intelligence is intangible. It is composed of:

  • Reasoning
  • Learning
  • Problem Solving
  • Perception
  • Linguistic Intelligence

The objectives of AI research are reasoning, knowledge representation, planning, learning, natural language processing, perception, and the ability to move and manipulate objects. General intelligence is among the field's long-term goals.



Applications of AI

· Gaming − AI plays an important role in strategic games, enabling a machine to evaluate large numbers of possible positions based on deep knowledge. Examples include chess, river crossing, and the N-queens problem.

· Natural Language Processing − It is possible to interact with a computer that understands natural language spoken by humans.

· Expert Systems − Machines or software that provide explanations and advice to users.

· Vision Systems − These systems understand, interpret, and describe visual input on the computer.

· Speech Recognition − Some AI-based speech recognition systems can hear speech, express it as sentences, and understand its meaning while a person talks to them. Examples include Siri and Google Assistant.

· Handwriting Recognition − Handwriting recognition software reads text written on paper, recognises the shapes of the letters, and converts them into editable text.

· Intelligent Robots − Robots are able to carry out the instructions given by a human.
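Game problems like N-queens, mentioned above, are typically solved by searching a space of possible positions. A minimal sketch (an illustrative backtracking search, not from the article) in Python:

```python
def solve_n_queens(n):
    """Return one placement of n queens as a column index per row, or None."""
    def safe(cols, row, col):
        # A new queen conflicts with an earlier one if they share a
        # column or sit on the same diagonal.
        return all(col != c and abs(col - c) != row - r
                   for r, c in enumerate(cols))

    def place(cols):
        row = len(cols)
        if row == n:                      # all rows filled: solution found
            return cols
        for col in range(n):              # try each column in this row
            if safe(cols, row, col):
                result = place(cols + [col])
                if result:
                    return result
        return None                       # dead end: backtrack

    return place([])

solution = solve_n_queens(8)              # e.g. a valid 8-queens placement
```

The search abandons any partial placement as soon as two queens attack each other, which prunes most of the position space a naive enumeration would visit.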

Major Goals

  • Knowledge reasoning
  • Planning
  • Machine Learning
  • Natural Language Processing
  • Computer Vision
  • Robotics



There are 3 types of artificial intelligence (AI): narrow or weak AI, general or strong AI, and artificial superintelligence.

We have currently achieved only narrow AI. As machine learning capabilities continue to evolve and scientists get closer to achieving general AI, theories and speculation about the future of AI circulate. There are two main theories.

One theory is based on fear of a dystopian future, where super intelligent killer robots take over the world, either wiping out the human race or enslaving all of humanity, as depicted in many science fiction narratives.

The other theory predicts a more optimistic future, where humans and bots work together, with humans using artificial intelligence as a tool to enhance their life experience.

Artificial intelligence tools are already having a significant impact on the way we conduct business worldwide, completing tasks with a speed and efficiency that wouldn’t be possible for humans. However, human emotion and creativity are incredibly special and unique, and extremely difficult - if not impossible - to replicate in a machine. Codebots is backing a future where humans and bots work together for the win.

In this article, we discuss the 3 types of AI in depth, and theories on the future of AI. Let’s start by clearly defining artificial intelligence.

What is artificial intelligence (AI)?

Artificial Intelligence is a branch of computer science that endeavours to replicate or simulate human intelligence in a machine, so machines can perform tasks that typically require human intelligence. Some programmable functions of AI systems include planning, learning, reasoning, problem solving, and decision making.

Artificial intelligence systems are powered by algorithms, using techniques such as machine learning, deep learning and rules. Machine learning algorithms feed data to AI systems, using statistical techniques to enable them to learn. Through machine learning, AI systems get progressively better at tasks without having to be explicitly programmed to do so.
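The idea that a system improves from data rather than explicit rules can be shown with a toy sketch (a hypothetical example, not from the article): learning the slope of y = 2x by gradient descent, where each pass over the data nudges the model closer to the right answer.

```python
def fit_slope(data, epochs=200, lr=0.01):
    """Learn w in y ≈ w * x by gradient descent on squared error."""
    w = 0.0                            # start knowing nothing
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y          # how wrong the current model is
            w -= lr * 2 * error * x    # step against the error gradient
    return w

data = [(x, 2 * x) for x in range(1, 6)]   # samples drawn from y = 2x
w = fit_slope(data)                        # w converges toward 2.0
```

No rule "multiply by 2" is ever programmed; the value is recovered purely from repeated exposure to examples, which is the essence of the statistical learning described above.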

If you’re new to the field of AI, you’re likely most familiar with the science fiction portrayal of artificial intelligence: robots with human-like characteristics. While we’re not quite at the human-like robot level of AI yet, there are a plethora of incredible things scientists, researchers and technologists are doing with AI.

AI can encompass anything from Google’s search algorithms, to IBM’s Watson, to autonomous weapons. AI technologies have transformed the capabilities of businesses globally, enabling humans to automate previously time-consuming tasks, and gain untapped insights into their data through rapid pattern recognition.

What are the 3 types of AI?

AI technologies are categorised by their capacity to mimic human characteristics, the technology they use to do this, their real-world applications, and the theory of mind, which we’ll discuss in more depth below.

Using these characteristics for reference, all artificial intelligence systems - real and hypothetical - fall into one of three types:

  1. Artificial narrow intelligence (ANI), which has a narrow range of abilities;
  2. Artificial general intelligence (AGI), which is on par with human capabilities; or
  3. Artificial superintelligence (ASI), which is more capable than a human.

Artificial Narrow Intelligence (ANI) / Weak AI / Narrow AI

Artificial narrow intelligence (ANI), also referred to as weak AI or narrow AI, is the only type of artificial intelligence we have successfully realized to date. Narrow AI is goal-oriented, designed to perform singular tasks - i.e. facial recognition, speech recognition/voice assistants, driving a car, or searching the internet - and is very intelligent at completing the specific task it is programmed to do.

While these machines may seem intelligent, they operate under a narrow set of constraints and limitations, which is why this type is commonly referred to as weak AI. Narrow AI doesn’t mimic or replicate human intelligence, it merely simulates human behaviour based on a narrow range of parameters and contexts.

Consider the speech and language recognition of the Siri virtual assistant on iPhones, vision recognition of self-driving cars, and recommendation engines that suggest products you may like based on your purchase history. These systems can only learn or be taught to complete specific tasks.

Narrow AI has experienced numerous breakthroughs in the last decade, powered by achievements in machine learning and deep learning. For example, AI systems today are used in medicine to diagnose cancer and other diseases with extreme accuracy through replication of human-esque cognition and reasoning.

Much of narrow AI’s machine intelligence comes from the use of natural language processing (NLP) to perform tasks. NLP is evident in chatbots and similar AI technologies. By understanding speech and text in natural language, AI is programmed to interact with humans in a natural, personalised manner.

Narrow AI can either be reactive, or have a limited memory. Reactive AI is incredibly basic; it has no memory or data storage capabilities, emulating the human mind’s ability to respond to different kinds of stimuli without prior experience. Limited memory AI is more advanced, equipped with data storage and learning capabilities that enable machines to use historical data to inform decisions.

Most AI is limited memory AI, where machines use large volumes of data for deep learning. Deep learning enables personalised AI experiences, for example, virtual assistants or search engines that store your data and personalise your future experiences.
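The reactive versus limited-memory distinction can be sketched with a toy example (the agent classes and responses below are hypothetical illustrations, not a real API): the reactive agent maps the current stimulus straight to a response, while the limited-memory agent also consults its stored history.

```python
class ReactiveAgent:
    """Responds to the current stimulus only; stores nothing."""
    def act(self, stimulus):
        return "retreat" if stimulus == "threat" else "explore"

class LimitedMemoryAgent:
    """Keeps a history of stimuli and lets past data inform decisions."""
    def __init__(self):
        self.history = []
    def act(self, stimulus):
        self.history.append(stimulus)
        # Historical data changes behaviour: repeated threats
        # trigger a response a reactive agent could never produce.
        if self.history.count("threat") >= 2:
            return "avoid area"
        return "retreat" if stimulus == "threat" else "explore"
```

Given the same stimulus twice, the reactive agent always answers the same way, whereas the limited-memory agent's second answer depends on what it has already seen.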

Examples of narrow AI:

  • Rankbrain by Google / Google Search
  • Siri by Apple, Alexa by Amazon, Cortana by Microsoft and other virtual assistants
  • IBM’s Watson
  • Image / facial recognition software
  • Disease mapping and prediction tools
  • Manufacturing and drone robots
  • Email spam filters / social media monitoring tools for dangerous content
  • Entertainment or marketing content recommendations based on watch/listen/purchase behaviour
  • Self-driving cars

Artificial General Intelligence (AGI) / Strong AI / Deep AI

Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the concept of a machine with general intelligence that mimics human intelligence and/or behaviours, with the ability to learn and apply its intelligence to solve any problem. AGI can think, understand, and act in a way that is indistinguishable from that of a human in any given situation.

AI researchers and scientists have not yet achieved strong AI. To succeed, they would need to find a way to make machines conscious, programming a full set of cognitive abilities. Machines would have to take experiential learning to the next level, not just improving efficiency on singular tasks, but gaining the ability to apply experiential knowledge to a wider range of different problems.

Strong AI uses a theory of mind AI framework, which refers to the ability to discern the needs, emotions, beliefs and thought processes of other intelligent entities. Theory of mind level AI is not about replication or simulation; it’s about training machines to truly understand humans.

The immense challenge of achieving strong AI is not surprising when you consider that the human brain is the model for creating general intelligence. The lack of comprehensive knowledge on the functionality of the human brain has researchers struggling to replicate basic functions of sight and movement.

The Fujitsu-built K computer, one of the fastest supercomputers, is one of the most notable attempts at achieving strong AI, but considering it took 40 minutes to simulate a single second of neural activity, it is difficult to determine whether strong AI will be achieved in the foreseeable future. As image and facial recognition technology advances, we are likely to see an improvement in the ability of machines to learn and see.

Artificial Superintelligence (ASI)

Artificial superintelligence (ASI) is hypothetical AI that doesn’t just mimic or understand human intelligence and behaviour; ASI is where machines become self-aware and surpass the capacity of human intelligence and ability.

Superintelligence has long been the muse of dystopian science fiction in which robots overrun, overthrow, and/or enslave humanity. The concept of artificial superintelligence sees AI evolve to be so akin to human emotions and experiences, that it doesn’t just understand them, it evokes emotions, needs, beliefs and desires of its own.

In addition to replicating the multi-faceted intelligence of human beings, ASI would theoretically be exceedingly better at everything we do: math, science, sports, art, medicine, hobbies, emotional relationships, everything. ASI would have a greater memory and a faster ability to process and analyse data and stimuli. Consequently, the decision-making and problem-solving capabilities of superintelligent beings would be far superior to those of human beings.

The potential of having such powerful machines at our disposal may seem appealing, but the concept itself has a multitude of unknown consequences. If self-aware superintelligent beings came to be, they would be capable of pursuing ideas like self-preservation. The impact this would have on humanity, our survival, and our way of life is pure speculation.

Is AI dangerous? Will robots take over the world?

AI’s rapid growth and powerful capabilities have made many people paranoid about the “inevitability” and proximity of an AI takeover.

In his book Superintelligence, Nick Bostrom begins with “The Unfinished Fable of the Sparrows.” Basically, some sparrows decided they wanted a pet owl. Most sparrows thought the idea was awesome, but one was sceptical, voicing concern about how the sparrows could control an owl. This concern was dismissed in a “we’ll deal with that problem when it’s a problem” manner.

Elon Musk has similar concerns around superintelligent beings, and would argue that humans are the sparrows in Bostrom’s metaphor, and ASI is the owl. As it was for the sparrows, the “control problem” is especially concerning because we may only get one chance at solving it.

Mark Zuckerberg is less concerned about this hypothetical control problem, saying the positives of AI outweigh potential negatives.

Most researchers agree that superintelligent AI is unlikely to exhibit human emotions, and we have no reason to expect ASI will become malevolent. When considering how AI might become a risk, two key scenarios have been determined as most likely.

AI could be programmed to do something devastating.

Autonomous weapons are AI systems programmed to kill. In the hands of the wrong person, autonomous weapons could inadvertently lead to an AI war and mass casualties, potentially even the end of humanity. Such weapons may be designed to be extremely difficult to “turn off”, and humans could plausibly, rapidly lose control. This risk exists even with narrow AI, but grows exponentially as autonomy increases.


AI could be programmed to do something beneficial, but develop a destructive method for achieving its goal.

It can be difficult to program a machine to complete a task when you haven’t carefully and clearly outlined your goals. Suppose you ask an intelligent car to take you somewhere as fast as possible. The instruction “as fast as possible” fails to consider safety, road rules, and so on. The intelligent car may successfully complete its task, but what havoc might it cause in the process?

If a machine is given a goal, and we then need to change the goal or stop the machine, how can we ensure it doesn’t view our attempts to stop it as a threat to the goal? How can we ensure it doesn’t do “whatever it takes” to complete the goal? The danger lies in the “whatever it takes”: the risks with AI aren’t necessarily about malevolence, they’re about competence.
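The intelligent-car scenario is a case of objective misspecification, which can be illustrated with a toy sketch (the routes and scores below are hypothetical): an optimiser given only "as fast as possible" competently picks an option a better-specified goal would reject.

```python
# Hypothetical routes with travel time and rule violations incurred.
routes = {
    "highway":  {"time": 10, "violations": 0},
    "shortcut": {"time": 6,  "violations": 3},  # faster, but breaks road rules
}

def fastest(route):
    """The literal goal: minimise time, nothing else."""
    return -routes[route]["time"]

def fast_and_safe(route):
    """A goal that also penalises rule violations."""
    return -routes[route]["time"] - 100 * routes[route]["violations"]

naive_choice = max(routes, key=fastest)          # competently unsafe
aligned_choice = max(routes, key=fast_and_safe)  # fast, within the rules
```

Neither objective is "malevolent"; the unsafe choice follows directly from a goal that left safety out, which is exactly the competence-not-malevolence point above.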

Super intelligent AI would be extremely efficient at attaining goals, whatever they may be, but we need to ensure these goals align with ours if we expect to maintain some level of control.
