History of Artificial Intelligence

Artificial Intelligence

Artificial intelligence (AI) is a young discipline of about sixty years, bringing together a set of sciences, theories, and techniques (including symbolic logic, statistics, probabilities, computational neurobiology, and computer science) that aim to imitate the cognitive abilities of a human being. Initiated in the wake of the Second World War, its developments are intimately linked to those of computing and have led computers to perform increasingly complex tasks that could previously only be delegated to a human.

However, this automation remains far from human intelligence in the strict sense, which makes the name open to criticism by some experts. The ultimate stage of their research (a “strong” AI, i.e. the ability to contextualize very different specialized problems autonomously) is in no way comparable to current achievements (“weak” or “moderate” AIs, extremely efficient in their training field). “Strong” AI, which has so far materialized only in science fiction, would require advances in basic research (not just performance improvements) to be able to model the world as a whole.

Since 2010, however, the discipline has experienced a new boom, mainly thanks to the considerable improvement in the computing power of computers and access to massive quantities of data.

Renewed promises and sometimes fantasized concerns complicate an objective understanding of the phenomenon. Brief historical reminders can help to situate the discipline and inform current debates.

1940-1960: Birth of AI in the wake of cybernetics

The period between 1940 and 1960 was strongly marked by the conjunction of technological developments and the desire to understand how to bring together the functioning of machines and organic beings. For Norbert Wiener, a pioneer of cybernetics, the aim was to unify mathematical theory, electronics, and automation as “a whole theory of control and communication, both in animals and machines”. Shortly before, a first mathematical and computer model of the biological neuron (the formal neuron) had been developed by Warren McCulloch and Walter Pitts as early as 1943.
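The behavior of that 1943 formal neuron can be sketched in a few lines of Python (a simplified illustration; the original model also allowed inhibitory inputs, omitted here): the neuron "fires" when the sum of its binary inputs reaches a threshold.

```python
def mp_neuron(inputs, threshold):
    """McCulloch-Pitts formal neuron: returns 1 (fires) when the
    sum of its binary inputs reaches the threshold, else 0."""
    return 1 if sum(inputs) >= threshold else 0

# With a threshold of 2, a two-input neuron behaves like a logical AND.
print(mp_neuron([1, 1], threshold=2))  # -> 1
print(mp_neuron([1, 0], threshold=2))  # -> 0
# With a threshold of 1, it behaves like a logical OR.
print(mp_neuron([0, 1], threshold=1))  # -> 1
```

By composing such threshold units, McCulloch and Pitts argued that any logical proposition could be computed, a key step toward treating the brain as a computing machine.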

At the start of the 1950s, John von Neumann and Alan Turing did not create the term artificial intelligence, but they were the founding fathers of the technology behind it. They made the transition from computers based on 19th-century decimal logic (which dealt with values from 0 to 9) to machines based on binary logic (which rely on Boolean algebra, handling more or less long chains of 0s and 1s). The two researchers thus formalized the architecture of our contemporary computers and demonstrated that it was a universal machine, capable of executing whatever is programmed.

Turing, for his part, raised the question of the possible intelligence of a machine for the first time in his famous 1950 article “Computing Machinery and Intelligence” and described an “imitation game”, in which a human should be able to distinguish, in a teletype dialogue, whether he is conversing with a man or a machine.

Discovery of Artificial Intelligence

The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology), and Marvin Minsky (Carnegie-Mellon University) defined the field as “the construction of computer programs that engage in tasks that are currently more satisfactorily performed by human beings because they require high-level mental processes such as perceptual learning, memory organization and critical reasoning”. It is worth noting the success of what was not a conference but rather a workshop, the 1956 summer gathering at Dartmouth College: only six people, including McCarthy and Minsky, remained consistently present throughout this work (which relied essentially on developments based on formal logic).

While the technology remained fascinating and promising, machines had very little memory, making it difficult to use a computer language. However, some foundations still present today had already been laid, such as solution trees for solving problems. The IPL (Information Processing Language) had thus made it possible to write, as early as 1956, the LTM (Logic Theorist Machine) program, which aimed to prove mathematical theorems.
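The idea of a solution tree can be illustrated with a minimal depth-first search in Python (a toy sketch, not the historical IPL code): each branch is a candidate move, and the search returns the path of states that reaches the goal.

```python
def solve(state, goal, moves, path=None, seen=None):
    """Depth-first search over a solution tree: expand each state
    with `moves` until `goal` is reached; return the path of states."""
    path = (path or []) + [state]
    seen = seen if seen is not None else set()
    if state == goal:
        return path
    seen.add(state)
    for nxt in moves(state):
        if nxt not in seen:
            found = solve(nxt, goal, moves, path, seen)
            if found:
                return found
    return None

# Toy puzzle: reach 10 from 1; each node branches into "+1" and "*2".
steps = solve(1, 10, lambda n: [n + 1, n * 2] if n < 10 else [])
print(steps)  # climbs the first branch: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The same skeleton (expand, test, backtrack) underlies the theorem-proving searches of the era, only with logical formulas in place of numbers.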

Herbert Simon, an economist and sociologist, predicted in 1957 that AI would succeed in beating a human at chess within ten years. The progress of AI then stalled, but Simon’s vision proved right, 30 years later.

1980-1990: Expert systems

In 1968, Stanley Kubrick directed the film “2001: A Space Odyssey”, in which a computer, HAL 9000 (each letter only one step away from those of IBM), summarizes in itself the whole sum of ethical questions posed by AI: would such a highly sophisticated machine represent a good for humanity or a danger? The impact of the film was naturally not scientific, but it contributed to popularizing the theme, just as the science-fiction author Philip K. Dick never ceased to wonder whether, one day, machines would experience emotions.

With the advent of the first microprocessors in the early 1970s, AI took off again and entered the golden age of expert systems.

The path was opened at Stanford University with DENDRAL in 1965 (an expert system specialized in molecular chemistry) and MYCIN in 1972 (a system specialized in the diagnosis of blood diseases and the prescription of drugs). These systems relied on an “inference engine” programmed to be a logical mirror of human reasoning: when data were entered, the engine provided answers of a high level of expertise.
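The principle of such an inference engine can be sketched as a toy forward-chaining loop in Python (an illustration only; MYCIN’s actual rule base and certainty factors were far richer, and the rule names below are invented): if-then rules are applied to the known facts until no new conclusion can be derived.

```python
def forward_chain(facts, rules):
    """Tiny forward-chaining inference engine: repeatedly apply
    if-then rules until no new fact can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            # Fire a rule when all its conditions are known facts.
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical medical-style rules, merely in the spirit of MYCIN.
rules = [
    (["fever", "gram_negative"], "suspect_bacteremia"),
    (["suspect_bacteremia"], "recommend_antibiotic"),
]
derived = forward_chain(["fever", "gram_negative"], rules)
print(sorted(derived))
```

The expert knowledge lives entirely in the rule list, which is exactly why development and maintenance of large rule bases later became so problematic.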

Success of Expert Systems

The promises foresaw a massive development, but the craze fell again at the end of the 1980s and early 1990s. Programming an expert system turned out to be one of the most difficult kinds of programming work, and development and maintenance became extremely problematic. In the 1990s, the term artificial intelligence had almost become taboo, and more modest variations had even entered university language, such as “advanced computing”.

The success in May 1997 of Deep Blue (IBM’s expert system) at chess against Garry Kasparov fulfilled Herbert Simon’s 1957 prophecy. The operation of Deep Blue relied on a systematic brute-force algorithm: all possible moves were evaluated and weighted. This defeat of a human remains highly symbolic in the history of artificial intelligence, but Deep Blue was still only able to manage a very limited perimeter (the rules of chess).

Latest Artificial Intelligence Advancements

Since 2010, the discipline has experienced a new bloom based on massive data and new computing power. Two factors explain this boom:

– First of all, access to massive volumes of data. Previously, samples had to be collected by hand to provide the machine with examples; today, a simple search on Google can find millions.

– Then, the discovery of the very high efficiency of graphics card processors (GPUs) for accelerating the computation of learning algorithms. The process being highly iterative, before 2010 it could take weeks to process an entire sample; the computing power of these cards has enabled considerable progress at a limited financial cost.

This new technological equipment has enabled some significant public successes and has boosted funding. In 2011, Watson, IBM’s AI, won the quiz show Jeopardy! against two of its champions. In 2012, Google X succeeded in having an AI recognize cats in a video; more than 16,000 processors were used for this task, but the potential is extraordinary: a machine learns to distinguish something. In 2016, AlphaGo (Google’s AI specialized in the game of Go) beat the European champion (Fan Hui), the world champion (Lee Sedol), and then itself (AlphaGo Zero). Let us specify that the game of Go has combinatorics far greater than chess (more than the number of particles in the universe), so it is impossible to achieve such significant results through raw strength alone (as Deep Blue did in 1997).

Deep Learning: Revolutionizing the Technology

Where did this miracle come from? A complete paradigm shift from expert systems. The approach has become inductive: it is no longer a question of coding rules, as for expert systems, but of letting computers discover them alone by correlation and classification, on the basis of massive amounts of data.
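This inductive principle can be illustrated with a tiny perceptron in Python (a deliberately minimal sketch, far simpler than deep learning): nobody codes the OR rule below; the model infers it from labeled examples by correcting its weights on every mistake.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn a linear decision rule from labeled examples: the weights
    are nudged whenever the current prediction is wrong."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples of logical OR; the rule itself is never coded.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```

Deep learning stacks many layers of such units and trains them on massive data, but the inversion is the same: rules are learned from examples instead of being written by experts.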

Among machine learning techniques, deep learning seems the most promising for a number of applications (including voice and image recognition). In 2003, Geoffrey Hinton (University of Toronto), Yoshua Bengio (University of Montreal), and Yann LeCun (New York University) decided to start a research program to bring neural networks up to date. Experiments conducted simultaneously at Microsoft, Google, and IBM, with the help of Hinton’s Toronto laboratory, showed that this type of learning succeeded in halving the error rates for speech recognition. Hinton’s image recognition team achieved similar results.

Overnight, a large majority of research teams turned to this technology, with indisputable benefits. This type of learning has also enabled considerable progress in text recognition but, according to experts like Yann LeCun, there is still a long way to go before producing text understanding systems. Conversational agents illustrate this challenge well: our smartphones already know how to transcribe an instruction but cannot fully contextualize it or analyze our intentions.
