Sample 6 - Part 2
5 Pages - AI engineering


Artificial Intelligence

Methodology 

Intelligence has been defined as "the ability to solve problems, memorize, reason, understand, think abstractly, and plan, developing behavioral and cognitive skills that allow adaptation to the physical and social environment."

Artificial intelligence (AI) is one of the newest sciences. The discipline began to develop shortly after World War II, and the name itself was coined in 1956.

In computer vision, interpreting an image can be defined as transforming a digital data set into a data structure that describes the semantics of that data set in a given context. Computer vision is the area of science dedicated to developing theories, methods, and techniques for automatically extracting the useful information contained in images, which are captured by computer systems through devices such as video cameras and scanners that make it possible to interpret them.

The purpose of the computer vision area is to determine the characteristics of the objects represented in an image. A wide variety of problems arises, depending on the nature of the images and the characteristics to be extracted from them.
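To make the definition above concrete, here is a minimal sketch of such a transformation, assuming a grayscale image held in a NumPy array: thresholding and connected-component labeling turn raw pixels into a small semantic description (how many objects there are, and where). The threshold value and the synthetic test image are illustrative choices, not part of any standard.

```python
import numpy as np
from scipy import ndimage

def describe_image(gray: np.ndarray, threshold: int = 128) -> dict:
    """Turn a grayscale image (2-D uint8 array) into a structured
    description: how many bright objects it contains and where they are."""
    mask = gray > threshold                   # segment foreground pixels
    labeled, n_objects = ndimage.label(mask)  # group pixels into components
    boxes = [
        {"top": sl[0].start, "left": sl[1].start,
         "bottom": sl[0].stop, "right": sl[1].stop}
        for sl in ndimage.find_objects(labeled)
    ]
    return {"object_count": n_objects, "bounding_boxes": boxes}

# Tiny synthetic example: two bright squares on a dark background.
img = np.zeros((40, 40), dtype=np.uint8)
img[5:15, 5:15] = 200
img[25:35, 20:30] = 220
print(describe_image(img))  # {'object_count': 2, 'bounding_boxes': [...]}
```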

Computer vision broadens the range of computer applications, including mobile robot navigation, complex manufacturing tasks, satellite image analysis, medical image processing, and organizational security.

Currently, artificial intelligence covers a wide variety of subfields, from general-purpose areas, such as learning and perception, to specific tasks such as playing chess, proving mathematical theorems, and diagnosing diseases. AI systematizes and automates intellectual tasks and is therefore potentially relevant to any domain of human intellectual activity. In that sense, it is truly a universal field.

When we approach the concept of intelligence, we see that it is a concept related to the construction of human cognitive structures, responsible for the formation of reason, a characteristic peculiar to humans among animals. Since man is the only rational animal, it is said that he is the only intelligent being. There are studies that attribute the concept of intelligence to other animals and plants, but this is obviously not a concept comparable to human intelligence. It is, rather, a concept relative to the analysis in question: this non-rational intelligence would be the ability of a living being to adapt to the circumstances of its environment. In this way, we can apply the concept to the machine, defining, then, a machine intelligence.

The concept of artificial intelligence encompasses more than the machine itself: its goal is to make the computer behave intelligently. By intelligent behavior we mean activities that only a human being could otherwise perform, among them tasks involving reasoning (planning and strategy) and perception (recognition of images, sounds, etc.).

This intelligence would be its genetic capacity as a problem-solving tool. Genetic capacity here means all the knowledge embedded at the hardware level, which allows a certain set of possible operating states through programs. Artificial intelligence would then be a kind of intelligence built by man: hence, artificial.

For artificial intelligence to succeed, we need intelligence and an artifact. The computer has been the artifact of choice. The modern digital electronic computer was invented independently and almost simultaneously by scientists in three countries involved in World War II. The first operational computer was the electromechanical Heath Robinson, built in 1940 by Alan Turing's team with a single purpose: deciphering German messages. In 1943, the same group developed the Colossus, a powerful general-purpose machine based on vacuum tubes.

The first operational programmable computer was the Z-3, created by Konrad Zuse in Germany in 1941. Zuse also invented floating-point numbers and the first high-level programming language, Plankalkül. The first electronic computer, the ABC, was assembled by John Atanasoff and his student Clifford Berry between 1940 and 1942 at Iowa State University in the United States. Atanasoff's research received little support or recognition; it was the ENIAC, developed as part of a secret military project at the University of Pennsylvania by a team that included John Mauchly and John Eckert, that proved to be the most influential precursor of modern computers.

Over the following half century, each generation of computer hardware brought an increase in speed and capacity and a reduction in price. Performance has doubled roughly every 18 months, and a decade or two of further growth at that rate is expected. After that, we will need molecular engineering or some other new technology.
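The doubling claim is easy to make concrete with a short calculation; the 18-month doubling period is the figure quoted above, and the printed multipliers follow directly from it.

```python
# Growth implied by a doubling of performance every 18 months.
DOUBLING_MONTHS = 18

def growth_factor(years: float) -> float:
    """Performance multiplier after the given number of years."""
    return 2 ** (years * 12 / DOUBLING_MONTHS)

for years in (1, 5, 10, 20):
    print(f"{years:>2} years -> ~{growth_factor(years):,.0f}x")
# 10 years -> ~102x; 20 years -> ~10,322x
```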

Of course, there were computing devices before the electronic computer; the first calculating machines date from the seventeenth century. The first programmable machine was a loom created in 1805 by Joseph Marie Jacquard (1752-1834) that used punched cards to store instructions for the pattern to be woven. In the middle of the 19th century, Charles Babbage (1792-1871) designed two machines, neither of which he completed. The "Difference Engine" was intended to compute mathematical tables for engineering and scientific projects; it was finally built, and shown to work, in 1991 at the London Science Museum (Swade, 1993). Babbage's "Analytical Engine" was far more ambitious: it included addressable memory, stored programs, and conditional jumps, and was the first artifact capable of universal computation.

 

The Turing test

The Turing test, proposed by Alan Turing (1950), was designed to provide a satisfactory operational definition of intelligence. Rather than drawing up a long and perhaps controversial list of qualifications required for intelligence, he suggested a test based on indistinguishability from undeniably intelligent entities: human beings. The computer passes the test if a human interrogator, after posing some written questions, cannot tell whether the written answers came from a person or from a computer.

 

Alan Turing, in his famous essay "Computing Machinery and Intelligence" (Turing, 1950), suggested that, instead of asking whether machines can think, we should ask whether machines can pass a behavioral intelligence test, which came to be called the Turing test. The test consists of having a program conduct a conversation (via typed messages online) with an interrogator for 5 minutes. The interrogator must then guess whether the conversation was with a program or with a person; the program passes the test if it fools the interrogator 30% of the time. Turing conjectured that, by the year 2000, a computer with a storage capacity of 10^9 units could be programmed well enough to pass the test, but he was wrong. Some people have been fooled for 5 minutes; for example, the ELIZA program and the internet chatbot MGONZ fooled humans who did not realize they might be talking to a program, and the ALICE program fooled a judge in the 2001 Loebner Prize competition. However, no program has come close to the 30% criterion against trained judges, and the field of AI as a whole has paid little attention to the Turing test.
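The pass criterion described above can be sketched as a toy simulation. This is not Turing's protocol itself: the judge is stubbed out as a random guesser whose accuracy is a made-up parameter, and only the 30% threshold comes from the text.

```python
import random

PASS_THRESHOLD = 0.30  # Turing's criterion: fool the judge 30% of the time
N_SESSIONS = 10_000

def judge_is_fooled(judge_accuracy: float = 0.75) -> bool:
    """Stub for one five-minute session: the judge correctly identifies
    the machine with probability `judge_accuracy` (a made-up figure)."""
    return random.random() > judge_accuracy

fooled = sum(judge_is_fooled() for _ in range(N_SESSIONS))
rate = fooled / N_SESSIONS
print(f"fooled the judge in {rate:.1%} of sessions:",
      "PASS" if rate >= PASS_THRESHOLD else "FAIL")
```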

 

First, let us look at some terminology: the claim that machines could act intelligently (or, perhaps, act as if they were intelligent) is called the weak AI hypothesis by philosophers; the claim that machines that do so are actually thinking (rather than merely simulating thought) is called the strong AI hypothesis.

Most artificial intelligence researchers take the weak AI hypothesis for granted and do not care about the strong AI hypothesis; as long as their program works, they do not care whether you call it a simulation of intelligence or real intelligence. All AI researchers, however, should be concerned about the ethical implications of their work.

Some critics have argued that AI, pursued within the cult of computationalism, does not stand even a ghost of a chance of producing durable results, and that it is time to divert the efforts of AI researchers, and the considerable resources available to support them, to avenues other than the computational approach.

Clearly, whether AI is impossible depends on how it is defined. In essence, AI is the search for the best agent program in a given architecture. With this formulation, AI is possible by definition: for any digital architecture consisting of K bits of storage, there are exactly 2^K agent programs, and all we have to do to find the best one is enumerate and test them all. This may not be feasible for large K, but philosophers are concerned with the theoretical, not the practical.
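The enumeration argument can be made literal for a tiny K. In the sketch below, the scoring function is an arbitrary stand-in for evaluating an agent program in its environment; only the 2^K count comes from the text.

```python
from itertools import product

K = 4  # bits of storage; 2**K candidate "programs" -- tiny on purpose

def score(program: tuple[int, ...]) -> int:
    """Arbitrary stand-in for evaluating an agent program in its
    environment; here we simply reward set bits."""
    return sum(program)

programs = list(product((0, 1), repeat=K))  # all 2**K bit patterns
best = max(programs, key=score)
print(f"{len(programs)} programs enumerated; best: {best}")
```

The exponential count is exactly why this is not feasible for large K: already at a few hundred bits, the number of candidate programs exceeds the estimated number of atoms in the observable universe.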

 

The definition of AI works well for the engineering problem of finding a good agent, given an architecture. Philosophers, however, are interested in the problem of comparing two architectures: human and machine. Furthermore, they traditionally pose the question differently, asking, "Can machines think?" Unfortunately, this question is ill-defined. To understand why, consider the following questions:

– Can machines fly?

– Can machines swim?

 

Most people agree that the answer to the first question is yes, airplanes can fly, but the answer to the second is no: ships and submarines move through water, but we do not call this movement swimming. Yet neither the questions nor the answers have any impact on the professional lives of aeronautical and naval engineers or the users of their products. The answers have very little to do with the design or capabilities of airplanes and submarines; they are much more about the words we choose to use. In Portuguese, the word "swim" means "to move through the water by moving parts of the body," whereas the word "fly" carries no such restriction on the means of locomotion. The practical possibility of "thinking machines" has been with us for only about 50 years, which is not long enough for Portuguese speakers to agree on the meaning of the word "think."

Many philosophers claim that a machine that passed the Turing test would still not actually be thinking, but would only be simulating thought. Once again, Turing anticipated the objection. He quotes a lecture by Professor Geoffrey Jefferson (1949):

"Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain; that is, not only write it but know that it had written it."

This is the argument from consciousness: the machine must be aware of its own mental states and actions. While consciousness is an important topic, the key point here relates to phenomenology, or the study of direct experience: the machine would really have to feel emotions. Others focus on intentionality, that is, the question of whether the machine's beliefs, desires, and other representations are really "about" something that exists in the real world.

 

Turing offers no argument that machines cannot in fact be conscious (or have phenomenology, or intentionality). Instead, he argues that the question is as ill-defined as the question "Can machines think?" Besides, why should we insist on a higher standard for machines than we adopt for humans? After all, in ordinary life we never have any direct evidence about the inner mental states of other humans.

In 1828, artificial urea was synthesized for the first time, by Friedrich Wöhler. This was important because it showed that organic and inorganic chemistry could be unified, a point that had been widely debated. However, in the case of artificial minds there is no such convention, and we are forced to rely on intuitions. The philosopher John Searle (1980) has a strong intuition:

"No one supposes that a computer simulation of a storm will leave us all wet ... Why on earth would anyone in his right mind suppose a computer simulation of mental processes actually had mental processes?"

One of the most striking characteristics of computer vision is that there is still no generic model of visual perception that can be applied in practice. Some examples of machine vision applications are listed below; a minimal sketch of one of them (flaw detection) follows the list:

  • Automatic analysis of human semen
  • Dimensional measurement of parts
  • Target tracking for intrusion detection
  • Cell morphology analysis
  • Recognition of human faces
  • Synthesis of human faces
  • Pattern recognition
  • Meteorology: weather forecasting and climate mapping
  • Geology: prospecting for mineral deposits and land use
  • Agriculture: crop forecasting and study of pest contamination
  • Industry: inventory and projection of water resources, fishing, and salt pans
  • Ecology: research on the ecological balance of the planet
  • Demographics: inventory and planning to manage the growth of populations and cities
  • Military: espionage, remote missile guidance, air and maritime traffic control
  • Computer graphics: capital market forecasting
  • Image transmission: biometrics, security
  • Robotics: automation of hazardous tasks
  • Inspection / pattern recognition (for example, flaw detection in integrated circuit chips)
  • Remote viewing (for example, in hostile terrain)
  • Human-machine interaction (for example, through gestures)
  • Medicine
  • Quality assurance, sorting and selection, process control, calibration
  • Material handling supervision
  • Barcode reading and OCR
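As a minimal sketch of the inspection/flaw-detection entry above: compare a test image against a known-good reference and flag pixels that deviate beyond a tolerance. The arrays, the tolerance, and the defect are all synthetic stand-ins for real chip photographs.

```python
import numpy as np

def find_flaws(reference: np.ndarray, test: np.ndarray, tol: int = 30):
    """Flag pixel positions where the test image deviates from a
    known-good reference by more than `tol` gray levels."""
    diff = np.abs(test.astype(np.int16) - reference.astype(np.int16))
    return np.argwhere(diff > tol)              # (row, col) of each defect

reference = np.full((8, 8), 100, dtype=np.uint8)   # ideal part
test = reference.copy()
test[3, 5] = 10                                    # a synthetic defect
print(find_flaws(reference, test))                 # [[3 5]]
```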



For cognitive science, perception has been the source of intense and sophisticated research, in an unprecedented effort to understand how the human mind transforms information from the senses into conscious perception.

Visual perception in particular became the focus of researchers' attention, since all other sensory perceptions can in some way be subsumed into suggestive images. Many experiments seem to corroborate the thesis that image-rich representation is part of thought, not merely a verbal description.

The vision systems that we know today are capable of constructing descriptions of the environment that surrounds them, processing and reconstructing images. 

Vision is intimately linked to the idea of computational perception: the machine recognizes its environment and behaves accordingly. Computational visual perception is thus related to the movements of agents, their motor coordination, and the control of their movements, and we cannot avoid talking about robotics when addressing this new concept of active vision.

Active vision (one of the areas of computer vision) is required in real-time robotics applications. An active vision system must be simple, with enough computational power to be feasible, efficient, and effective in applications that require "online" processing. Such a simple model can be extended to other areas with the same characteristics (the same model has been applied in computer animation).
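What "online" processing means here can be sketched as a closed perception-action loop: grab a frame, locate the target, and command the camera to re-center on it. Everything in the sketch is a simulated stand-in (the brightest-pixel detector, the proportional gain, and the units of the pan command are all illustrative assumptions).

```python
import numpy as np

def locate_target(frame: np.ndarray) -> float:
    """Column of the brightest pixel: a stand-in for real detection."""
    return float(np.unravel_index(frame.argmax(), frame.shape)[1])

def active_vision_step(frame: np.ndarray, gain: float = 0.5) -> float:
    """One cycle of the loop: measure the target's horizontal offset
    from the image center and return a proportional pan command."""
    center = frame.shape[1] / 2
    error = locate_target(frame) - center
    return gain * error          # degrees (or motor units) to pan

# Simulated frame with a bright target to the right of center.
frame = np.zeros((48, 64))
frame[20, 50] = 1.0
print(f"pan command: {active_vision_step(frame):+.1f}")   # +9.0
```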
