





To start with, Vinton Cerf and Robert Kahn are credited with the invention of the Internet.
Every computer connected to the Internet is part of a global network in which each machine has a unique address. Internet addresses take the form nnn.nnn.nnn.nnn, where each nnn is a number from 0 to 255.
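As a quick sketch, the dotted-quad format described above can be checked with Python's standard `ipaddress` module; the addresses below are just examples.

```python
# Check the nnn.nnn.nnn.nnn format described above, where each nnn is 0-255.
import ipaddress

def is_valid_ipv4(text):
    """Return True if text is a valid dotted-quad IPv4 address."""
    try:
        ipaddress.IPv4Address(text)
        return True
    except ValueError:
        return False

print(is_valid_ipv4("192.168.0.1"))  # True
print(is_valid_ipv4("256.1.1.1"))    # False: 256 is out of the 0-255 range
```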

When you type a domain name such as www.sciencecuriosity.tech.blog into a web browser such as Google Chrome, the browser sends a request to the hosting web server where the files of the website are located.
Once the connection is established, the browser downloads these files from that server (HTML documents as well as images and videos) and renders them on your screen.
This, in brief, is how the Internet works. The www in domain names stands for World Wide Web and acts as an identifier for web browsers.
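The name-to-address lookup the browser performs can be sketched with Python's standard `socket` module. The domain passed in is illustrative, and the address returned depends on the DNS answer at call time.

```python
# Sketch of the DNS step: ask for the IP address behind a domain name.
import socket

def resolve(domain):
    """Return the IPv4 address that DNS reports for the given domain name."""
    return socket.gethostbyname(domain)

# resolve("example.com") returns whatever address DNS currently reports;
# "localhost" conventionally resolves to the loopback address.
print(resolve("localhost"))
```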

When an electric current flows through a wire, it generates a magnetic field (magnetic flux) around it, and vice versa: a changing magnetic field induces a current in a wire.

A transformer has two coils. Current in the first (primary) coil induces a magnetic flux, and this flux in turn induces a current across the second (secondary) coil.
The voltage ratio equals the turns ratio, Vs/Vp = Ns/Np. This is how we can either step up (increase voltage) or step down (decrease voltage) by using an appropriate number of turns on the primary and secondary coils.

A transformer does not produce energy or electricity. It only changes the voltage.
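The ideal-transformer turns-ratio relation can be sketched in a few lines of code; the 240 V and 2000:100 turn counts below are illustrative numbers, not from any particular device.

```python
# Ideal-transformer relation: Vs / Vp = Ns / Np,
# so the secondary voltage scales with the turns ratio.
def secondary_voltage(v_primary, n_primary, n_secondary):
    """Output voltage of an ideal transformer from input voltage and turn counts."""
    return v_primary * n_secondary / n_primary

# Step down 240 V to 12 V with a 2000:100 (20:1) turns ratio.
print(secondary_voltage(240, 2000, 100))  # 12.0
```

Swapping the turn counts turns the same step-down transformer into a step-up one.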
The Wright brothers, Orville and Wilbur, were American researchers and aviation pioneers.
The two brothers are generally credited with inventing, building, and flying the world's first successful airplane.


They were passionate about flying. On December 17, 1903, at Kitty Hawk, North Carolina, they completed the first controlled, sustained flight of a powered airplane.
In 1904-05, the brothers developed a flying machine that was the first practical fixed-wing aircraft.
Although they were not the first to build piloted aircraft, the Wright brothers were the first to develop the flight controls that make fixed-wing powered flight possible.
This is why they are still remembered for their passion.

John McCarthy passed away on October 23, 2011. RIP. In 1956, at a conference organized with Marvin Minsky, Nat Rochester, and Claude Shannon, he named his field of study "artificial intelligence" (AI), although he often said that if he had to name it again he would have preferred to call it "computational intelligence". The conference was funded by the Rockefeller Foundation and was called the Dartmouth Summer Research Project on Artificial Intelligence.
In 1952, McCarthy had suggested to Claude Shannon that the study of thinking machines be called "automaton studies", but while preparing the August 1955 proposal to the Rockefeller Foundation to fund the conference, he decided a name with more marketing appeal would be better. The name "machine intelligence" also crossed his mind, but in the end he chose AI.
In the conference proposal, McCarthy suggested developing a new programming language to equip machines with intelligence (at a time when the most important high-level language was Fortran, a language not very suitable for AI). The language that grew out of the ideas of this conference was LISP (LISt Processing language).

In 1958, John McCarthy and his collaborators at the Massachusetts Institute of Technology created LISP, considered by some the second high-level programming language (after FORTRAN).
LISP has changed a lot since its inception and has a large number of dialects. It is considered the first functional programming language and, depending on whom you ask, also a declarative one.
In 2002, McCarthy said that "research in AI is quite fragmented. This is good because there are many possible approaches, among which two stand out. On the one hand, the biological approach, based on the idea that humans are intelligent, so AI should study humans and imitate their psychology or physiology.
On the other hand, the formal approach, based on the idea that studying and formalizing the concept of common sense will allow us to make machines intelligent."
According to McCarthy, AI had stagnated: "We still need new basic ideas; understanding intelligence is a very difficult scientific problem. I cannot predict how long it will take for machine intelligence to reach the human level, maybe 50 years, maybe 500 years, who knows."
Fedora is one of the most used Linux distributions today. It does not reach the popularity of Ubuntu, but it is widely used and adapts to any type of environment, thanks to the three flavors it offers: Workstation, Server, and Atomic.
Fedora is an RPM-based Linux distribution characterized as a stable system, maintained by an international community of engineers, graphic designers, and users. RPM is a package management tool also used by other distros such as Mageia, PCLinuxOS, and openSUSE.

Many of you might have seen the famous Hollywood movie series Terminator. Some of you might not have seen Terminator, but all of you would have seen superstar Rajnikanth's movie Robot. In these films, robots are shown as intelligent beings: they have brains and take independent decisions. But how exactly do these robots do that? The answer is Artificial Intelligence. What it is and how it works is what we will discuss.
Before coming to Artificial Intelligence (AI), one needs to know what intelligence is.
What is Intelligence?
By definition, it is the ability to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.
In general terms, it is all those things that the human brain does.
Artificial Intelligence (A.I.) is a way of making a computer, a computer-controlled robot, or a piece of software think intelligently, in a manner similar to how intelligent humans think.
AI is accomplished by studying how human brain thinks, and how humans learn, decide, and work while trying to solve a problem, and then using the outcomes of this study as a basis of developing intelligent software and systems.
It is not merely programming a robot we are talking about; AI goes much beyond that. It works on fuzzy logic, data mapping and screening, and decision making.
Artificial Intelligence (AI) is the study and creation of computer systems that can perceive, reason and act. The primary aim of AI is to produce intelligent machines. The intelligence should be exhibited by thinking, making decisions, solving problems, more importantly by learning. AI is an interdisciplinary field that requires knowledge in computer science, linguistics, psychology, biology, philosophy and so on for serious research.
According to the father of Artificial Intelligence, John McCarthy, it is “The science and engineering of making intelligent machines, especially intelligent computer programs”.
AI can also be defined as the area of computer science that deals with the ways in which computers can be made to perform cognitive functions ascribed to humans. But this definition does not say what functions are performed, to what degree they are performed, or how these functions are carried out.
AI draws heavily on the following domains of study.
1. Computer Science
2. Cognitive Science
3. Engineering
4. Ethics
5. Linguistics
6. Logic
7. Mathematics
8. Natural Sciences
9. Philosophy
10. Physiology
11. Psychology
12. Statistics
● STRONG ARTIFICIAL INTELLIGENCE
It deals with the creation of real intelligence artificially. Strong AI holds that machines can be made sentient or self-aware. There are two types of strong AI: human-like AI, in which the computer program thinks and reasons at the level of a human being, and non-human-like AI, in which the computer program develops a non-human way of thinking and reasoning.
● WEAK ARTIFICIAL INTELLIGENCE
Weak AI holds that creating human-level intelligence in machines is not possible, but that AI techniques can still be developed to solve many real-life problems. That is, it is the study of mental models implemented on a computer.
● A.I AND NATURE
Nowadays, AI techniques developed with inspiration from nature are becoming popular, and a new research area known as Nature-Inspired Computing is emerging. Biologically inspired AI approaches such as neural networks and genetic algorithms are already in place.
CHALLENGES
It is true that AI has not yet achieved its ultimate goal. AI systems still cannot match even a three-year-old child on many counts: the ability to recognize and remember different objects, adapt to new situations, understand and generate human languages, and so on. The main problem is that we still do not understand how the human mind works: how we learn new things and, especially, how we learn languages and reproduce them properly.
APPLICATIONS
There are many AI applications that we witness: robotics, machine translators, chatbots, and voice recognizers, to name a few. AI techniques are used to solve many real-life problems. Some kinds of robots are helping to find landmines and to search for humans trapped in rubble after natural calamities.

FUTURE OF AI
AI is a great field for dreamers to play around in. It evolved from the thought that making a human-like machine is possible. Though many conclude that this is not possible, a lot of research is still going on in this field to attain that final objective. Computers have inherent advantages: they do not get tired or lose their temper, and they keep getting faster. Only time will tell what the future of AI holds: whether or not it will attain human-level or above-human-level intelligence.
☆ Philosophy of AI
While exploiting the power of computer systems, human curiosity led us to wonder, "Can a machine think and behave like humans do?"
Thus, the development of AI started with the intention of creating similar intelligence in machines that we find and regard high in humans.
☆ History of AI
Here is the history of AI during the 20th century −
Year
Milestone / Innovation
1923
Karel Čapek's play "Rossum's Universal Robots" (R.U.R.) opens in London; this is the first use of the word "robot" in English.
1943
Foundations for neural networks laid.
1945
Isaac Asimov, a Columbia University alumnus, coined the term Robotics.
1950
Alan Turing introduced the Turing Test for the evaluation of intelligence and published Computing Machinery and Intelligence. Claude Shannon published a detailed analysis of chess playing as search.
1956
John McCarthy coined the term Artificial Intelligence. Demonstration of the first running AI program at Carnegie Mellon University.
1958
John McCarthy invents LISP programming language for AI.
1964
Danny Bobrow’s dissertation at MIT showed that computers can understand natural language well enough to solve algebra word problems correctly.
1965
Joseph Weizenbaum at MIT built ELIZA, an interactive program that carries on a dialogue in English.
1969
Scientists at Stanford Research Institute developed Shakey, a robot equipped with locomotion, perception, and problem solving.
1973
The Assembly Robotics group at Edinburgh University built Freddy, the Famous Scottish Robot, capable of using vision to locate and assemble models.
1979
The first computer-controlled autonomous vehicle, Stanford Cart, was built.
1985
Harold Cohen created and demonstrated the drawing program, Aaron.
1990
Major advances in all areas of AI −
● Significant demonstrations in machine learning
● Case-based reasoning
● Multi-agent planning
● Scheduling
● Data mining, Web Crawler
● Natural language understanding and translation
● Vision, Virtual Reality
● Games
1997
The Deep Blue Chess Program beats the then world chess champion, Garry Kasparov.
2000
Interactive robot pets become commercially available. MIT displays Kismet, a robot with a face that expresses emotions. The robot Nomad explores remote regions of Antarctica and locates meteorites.
☆ Goals of AI
● To Create Expert Systems − Systems which exhibit intelligent behavior, learn, demonstrate, explain, and advise their users.
● To Implement Human Intelligence in Machines − Creating systems that understand, think, learn, and behave like humans.
☆ What Contributes to AI ?
Artificial intelligence is a science and technology based on disciplines such as Computer Science, Biology, Psychology, Linguistics, Mathematics, and Engineering. A major thrust of AI is in the development of computer functions associated with human intelligence, such as reasoning, learning, and problem solving.
One or more of these areas can contribute to building an intelligent system.
☆ Programming Without and With AI
Programming without and with AI differs in the following ways −
| Programming Without AI | Programming With AI |
| A computer program without AI can answer only the specific questions it is meant to solve. | A computer program with AI can answer the generic questions it is meant to solve. |
| Modification in the program leads to a change in its structure. | AI programs can absorb new modifications by putting highly independent pieces of information together. Hence you can modify even a minute piece of information without affecting the program's structure. |
| Modification is not quick or easy, and it may affect the program adversely. | Modification is quick and easy. |
☆ What is an AI Technique ?
In the real world, knowledge has some unwelcome properties −
● Its volume is huge, next to unimaginable.
● It is not well-organized or well-formatted.
● It keeps changing constantly.
An AI technique is a manner of organizing and using knowledge efficiently, in such a way that −
● It is perceivable by the people who provide it.
● It is easily modifiable to correct errors.
● It is useful in many situations even though it may be incomplete or inaccurate.
AI techniques elevate the speed of execution of the complex programs they are equipped with.
☆ Applications of AI
AI has been dominant in various fields such as −
● Gaming − AI plays a crucial role in strategic games such as chess, poker, and tic-tac-toe, where the machine can think of a large number of possible positions based on heuristic knowledge.
● Natural Language Processing − It is possible to interact with a computer that understands the natural language spoken by humans.
● Expert Systems − There are applications which integrate machine, software, and special information to provide reasoning and advice. They give explanations and advice to their users.
● Vision Systems − These systems understand, interpret, and comprehend visual input on the computer. For example,
○ A spying aeroplane takes photographs, which are used to figure out spatial information or a map of the area.
○ Doctors use a clinical expert system to diagnose the patient.
○ Police use computer software that can recognize the face of a criminal against the stored portrait made by a forensic artist.
● Speech Recognition − Some intelligent systems are capable of hearing and comprehending language in terms of sentences and their meanings while a human talks to them. Such a system can handle different accents, slang words, noise in the background, changes in a human's voice due to cold, etc.
● Handwriting Recognition − Handwriting recognition software reads text written on paper with a pen or on a screen with a stylus. It can recognize the shapes of the letters and convert them into editable text.
● Intelligent Robots − Robots are able to perform the tasks given by a human. They have sensors to detect physical data from the real world such as light, heat, temperature, movement, sound, bumps, and pressure. They have efficient processors, multiple sensors, and large memory to exhibit intelligence. In addition, they are capable of learning from their mistakes and can adapt to a new environment.

Here is the list of frequently used terms in the domain of AI:
| Term | Meaning |
| Agent | Agents are systems or software programs capable of autonomous, purposeful action and reasoning directed towards one or more goals. They are also called assistants, brokers, bots, droids, intelligent agents, and software agents. |
| Autonomous Robot | Robot free from external control or influence and able to control itself independently. |
| Backward Chaining | Strategy of working backward to find the reason/cause of a problem. |
| Blackboard | It is the memory inside the computer, used for communication between cooperating expert systems. |
| Environment | It is the part of real or computational world inhabited by the agent. |
| Forward Chaining | Strategy of working forward for conclusion/solution of a problem. |
| Heuristics | It is knowledge based on trial and error, evaluations, and experimentation. |
| Knowledge Engineering | Acquiring knowledge from human experts and other resources. |
| Percepts | It is the format in which the agent obtains information about the environment. |
| Pruning | Overriding unnecessary and irrelevant considerations in AI systems. |
| Rule | It is a format of representing knowledge base in Expert System. It is in the form of IF-THEN-ELSE. |
| Shell | A shell is a software that helps in designing inference engine, knowledge base, and user interface of an expert system. |
| Task | It is the goal the agent tries to accomplish. |
| Turing Test | A test developed by Alan Turing to compare the intelligence of a machine against human intelligence. |
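The forward-chaining strategy from the table above can be sketched as a tiny rule engine. The rules and facts below are invented for illustration, not taken from any real expert system.

```python
# Minimal forward-chaining sketch for IF-THEN rules: repeatedly fire any rule
# whose premises are all known facts, until no new conclusions appear.
def forward_chain(rules, facts):
    """rules: list of (set_of_premises, conclusion); facts: iterable of facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"has_feathers"}, "is_bird"),
    ({"is_bird", "can_fly"}, "can_migrate"),
]
print(sorted(forward_chain(rules, {"has_feathers", "can_fly"})))
# ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']
```

Backward chaining would instead start from a goal such as `can_migrate` and work back through the rules to the facts that support it.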
The ability of a system to calculate, reason, perceive relationships and analogies, learn from experience, store and retrieve information from memory, solve problems, comprehend complex ideas, use natural language fluently, classify, generalize, and adapt to new situations.
As described by Howard Gardner, an American developmental psychologist, intelligence comes in multiple forms −
| Intelligence | Description | Example |
| Linguistic intelligence | The ability to speak, recognize, and use mechanisms of phonology (speech sounds), syntax (grammar), and semantics (meaning). | Narrators, Orators |
| Musical intelligence | The ability to create, communicate with, and understand meanings made of sound, understanding of pitch, rhythm. | Musicians, Singers, Composers |
| Logical-mathematical intelligence | The ability to use and understand relationships in the absence of action or objects; understanding complex and abstract ideas. | Mathematicians, Scientists |
| Spatial intelligence | The ability to perceive visual or spatial information, change it, and re-create visual images without reference to the objects, construct 3D images, and to move and rotate them. | Map readers, Astronauts, Physicists |
| Bodily-Kinesthetic intelligence | The ability to use complete or part of the body to solve problems or fashion products, control over fine and coarse motor skills, and manipulate the objects. | Players, Dancers |
| Intra-personal intelligence | The ability to distinguish among one's own feelings, intentions, and motivations. | Gautam Buddha |
| Interpersonal intelligence | The ability to recognize and make distinctions among other people’s feelings, beliefs, and intentions. | Mass Communicators, Interviewers |
You can say a machine or a system is artificially intelligent when it is equipped with at least one, and possibly all, of these intelligences.
Intelligence is intangible. It is composed of −
● Reasoning
● Learning
● Problem Solving
● Perception
● Linguistic Intelligence

Let us go through all the components briefly −
● Reasoning − It is the set of processes that enables us to provide a basis for judgement, decision making, and prediction. There are broadly two types −
| Inductive Reasoning | Deductive Reasoning |
| It uses specific observations to make broad general statements. | It starts with a general statement and examines the possibilities to reach a specific, logical conclusion. |
| Even if all of the premises are true, inductive reasoning allows the conclusion to be false. | If something is true of a class of things in general, it is also true for all members of that class. |
| Example − “Nita is a teacher. Nita is studious. Therefore, all teachers are studious.” | Example − “All women of age above 60 years are grandmothers. Shalini is 65 years. Therefore, Shalini is a grandmother.” |
● Learning − It is the activity of gaining knowledge or skill by studying, practising, being taught, or experiencing something. Learning enhances the awareness of the subjects of the study.
● The ability to learn is possessed by humans, some animals, and AI-enabled systems. Learning is categorized as −
○ Auditory Learning − It is learning by listening and hearing. For example, students listening to recorded audio lectures.
○ Episodic Learning − To learn by remembering sequences of events that one has witnessed or experienced. This is linear and orderly.
○ Motor Learning − It is learning by precise movement of muscles. For example, picking up objects, writing, etc.
○ Observational Learning − To learn by watching and imitating others. For example, a child tries to learn by mimicking her parent.
○ Perceptual Learning − It is learning to recognize stimuli that one has seen before. For example, identifying and classifying objects and situations.
○ Relational Learning − It involves learning to differentiate among various stimuli on the basis of relational properties rather than absolute properties. For example, adding a little less salt when cooking potatoes that came out salty last time, when they were cooked with, say, a tablespoon of salt.
○ Spatial Learning − It is learning through visual stimuli such as images, colors, maps, etc. For example, a person can create a road map in the mind before actually following the road.
○ Stimulus-Response Learning − It is learning to perform a particular behavior when a certain stimulus is present. For example, a dog raises its ears on hearing the doorbell.
● Problem Solving − It is the process in which one perceives a present situation and tries to arrive at a desired solution by taking some path that is blocked by known or unknown hurdles.
● Problem solving also includes decision making, the process of selecting the most suitable of the multiple alternatives available to reach the desired goal.
● Perception − It is the process of acquiring, interpreting, selecting, and organizing sensory information.
● Perception presumes sensing. In humans, perception is aided by sensory organs. In the domain of AI, the perception mechanism puts the data acquired by the sensors together in a meaningful manner.
● Linguistic Intelligence − It is one's ability to use, comprehend, speak, and write verbal and written language. It is important in interpersonal communication.
● Humans perceive by patterns, whereas machines perceive by sets of rules and data.
● Humans store and recall information by patterns; machines do it by searching algorithms. For example, the number 40404040 is easy to remember, store, and recall because its pattern is simple.
● Humans can figure out a complete object even if some part of it is missing or distorted, whereas machines cannot do this correctly.
The domain of artificial intelligence is huge in breadth and depth. While proceeding, we consider the broadly common and prospering research areas in the domain of AI −

The user's input, spoken into a microphone, goes to the sound card of the system. A converter turns the analog signal into an equivalent digital signal for speech processing. A database is used to compare the sound patterns and recognize the words. Finally, feedback is given to the database.
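The pattern-comparison step can be sketched as nearest-template matching. The feature vectors below are made up for illustration and are far simpler than real acoustic features.

```python
# Match a digitized utterance (a made-up feature vector) against stored
# word templates by Euclidean distance, as a stand-in for pattern comparison.
import math

TEMPLATES = {              # illustrative feature vectors, not real acoustics
    "yes": [0.9, 0.1, 0.4],
    "no":  [0.2, 0.8, 0.5],
}

def recognize(features):
    """Return the template word whose feature vector is closest to the input."""
    return min(TEMPLATES, key=lambda word: math.dist(TEMPLATES[word], features))

print(recognize([0.85, 0.15, 0.45]))  # yes
```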
This source-language text becomes the input to the translation engine, which converts it to text in the target language. Such systems are supported by an interactive GUI, a large vocabulary database, etc.
There is a large array of applications where AI is serving common people in their day-to-day lives −
AI is developing at such an incredible speed that it sometimes seems magical. There is an opinion among researchers and developers that AI could grow so immensely strong that it would become difficult for humans to control.
Humans developed AI systems by introducing into them every possible intelligence they could, and now humans themselves seem threatened by them.
An AI program that recognizes speech and understands natural language is theoretically capable of understanding every conversation on e-mail and telephone.
AI systems have already started replacing human beings in a few industries. They should not replace people in sectors where they hold dignified positions pertaining to ethics, such as nursing, surgery, the judiciary, and policing.
Self-improving AI systems could become so much mightier than humans that it would be very difficult to stop them from achieving their goals, which may lead to unintended consequences.
There are two components of NLP, as given below −
Natural Language Understanding (NLU) involves the following tasks −
● Mapping the given input in natural language into useful representations.
● Analyzing different aspects of the language.
Natural Language Generation (NLG) is the process of producing meaningful phrases and sentences in the form of natural language from some internal representation.
It involves −
● Text planning − It includes retrieving the relevant content from knowledge base.
● Sentence planning − It includes choosing required words, forming meaningful phrases, setting tone of the sentence.
● Text Realization − It is mapping sentence plan into sentence structure.
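The planning/realization split above can be sketched with a toy template-based generator. The knowledge base and sentence structure here are invented for illustration.

```python
# Toy NLG pipeline: retrieve content (text planning), then map the
# sentence plan onto a sentence structure (realization).
KB = {"agent": "the robot", "action": "picked up", "object": "the red block"}

def plan_content(kb):
    """Text planning: retrieve the relevant content from the knowledge base."""
    return kb["agent"], kb["action"], kb["object"]

def realize(agent, action, obj):
    """Text realization: map the sentence plan onto a sentence structure."""
    return f"{agent} {action} {obj}.".capitalize()

print(realize(*plan_content(KB)))  # The robot picked up the red block.
```

Real NLG systems also do sentence planning: choosing words, forming phrases, and setting the tone, which this sketch fixes in the template.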
NLU is harder than NLG.
● Phonology − It is the study of organizing sounds systematically.
● Morphology − It is the study of the construction of words from primitive meaningful units.
● Morpheme − It is a primitive unit of meaning in a language.
● Syntax − It refers to arranging words to make a sentence. It also involves determining the structural role of words in the sentence and in phrases.
● Semantics − It is concerned with the meaning of words and how to combine words into meaningful phrases and sentences.
● Pragmatics − It deals with using and understanding sentences in different situations and how the interpretation of the sentence is affected.
● Discourse − It deals with how the immediately preceding sentence can affect the interpretation of the next sentence.
● World Knowledge − It includes general knowledge about the world.
There are broadly five steps −
● Lexical Analysis − It involves identifying and analyzing the structure of words. The lexicon of a language means the collection of words and phrases in that language. Lexical analysis divides the whole chunk of text into paragraphs, sentences, and words.
● Syntactic Analysis (Parsing) − It involves analysis of the words in a sentence for grammar, and arranging the words in a manner that shows the relationships among them. A sentence such as “The school goes to boy” is rejected by an English syntactic analyzer.

● Semantic Analysis − It draws the exact or dictionary meaning from the text. The text is checked for meaningfulness by mapping syntactic structures to objects in the task domain. The semantic analyzer disregards sentences such as “hot ice-cream”.
● Discourse Integration − The meaning of any sentence depends upon the meaning of the sentence just before it. In addition, it also shapes the meaning of the immediately succeeding sentence.
● Pragmatic Analysis − During this step, what was said is re-interpreted in terms of what it actually meant. It involves deriving those aspects of language which require real-world knowledge.
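The lexical-analysis step above can be sketched with a naive splitter. Real analyzers handle abbreviations, punctuation, contractions, and much more.

```python
# Naive lexical analysis: split text into sentences, then into word tokens.
import re

def lexical_analysis(text):
    """Split text into sentences, and each sentence into word tokens."""
    sentences = [s for s in re.split(r"[.!?]\s*", text) if s]
    return [re.findall(r"[A-Za-z']+", s) for s in sentences]

print(lexical_analysis("The boy goes to school. He studies AI."))
# [['The', 'boy', 'goes', 'to', 'school'], ['He', 'studies', 'AI']]
```

Syntactic analysis would then check each token list against a grammar, which is where “The school goes to boy” gets rejected.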
Natural language has an extremely rich form and structure, and it is very ambiguous. There can be different levels of ambiguity −
● Lexical ambiguity − It is at a very primitive level, such as the word level. For example, should the word “board” be treated as a noun or a verb?
● Syntax-level ambiguity − A sentence can be parsed in different ways. For example, “He lifted the beetle with red cap.” Did he use the cap to lift the beetle, or did he lift a beetle that had a red cap?
● Referential ambiguity − Referring to something using pronouns. For example: Rima went to Gauri. She said, “I am tired.” Exactly who is tired?
● One input can mean different meanings.
● Many inputs can mean the same thing.
Robotics is a domain in artificial intelligence that deals with the study of creating intelligent and efficient robots.
Robots are artificial agents acting in a real-world environment.
Robots are aimed at manipulating objects by perceiving, picking, moving, and modifying their physical properties, or destroying them, thereby freeing manpower from repetitive functions without getting bored, distracted, or exhausted.
Robotics is a branch of AI that combines Electrical Engineering, Mechanical Engineering, and Computer Science for the design, construction, and application of robots.
● The robots have mechanical construction, form, or shape designed to accomplish a particular task.
● They have electrical components which power and control the machinery.
● They contain some level of computer program that determines what, when, and how a robot does something.
Here is the difference between the two −
| AI Programs | Robots |
| They usually operate in computer-simulated worlds. | They operate in the real physical world. |
| The input to an AI program is symbols and rules. | Input to robots is analog signals in the form of speech waveforms or images. |
| They need general-purpose computers to operate on. | They need special hardware with sensors and effectors. |
Locomotion is the mechanism that makes a robot capable of moving in its environment. There are various types of locomotion −
● Legged
● Wheeled
● Combination of Legged and Wheeled Locomotion
● Tracked slip/skid
● Legged locomotion consumes more power while demonstrating walking, jumping, trotting, hopping, climbing up or down, etc.
● It requires more motors to accomplish a movement. It is suited for rough as well as smooth terrain, where an irregular or too-smooth surface would make wheeled locomotion consume more power. It is a little difficult to implement because of stability issues.
● Legged robots come with one, two, four, or six legs. If a robot has multiple legs, then leg coordination is necessary for locomotion.
The total number of possible gaits (a periodic sequence of lift and release events for each leg) a robot can use depends upon the number of its legs.
If a robot has k legs, then the number of possible events is N = (2k − 1)!.
In the case of a two-legged robot (k = 2), the number of possible events is N = (2·2 − 1)! = 3! = 6.
Hence there are six possible different events −
● Lifting the Left leg
● Releasing the Left leg
● Lifting the Right leg
● Releasing the Right leg
● Lifting both the legs together
● Releasing both the legs together
In the case of k = 6 legs, there are (2·6 − 1)! = 11! = 39916800 possible events. Hence the complexity of a legged robot grows very rapidly with the number of legs.
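The event-count formula N = (2k − 1)! above can be verified directly in code:

```python
# Check the gait-event count N = (2k - 1)! from the text for k legs.
from math import factorial

def possible_events(k_legs):
    """Number of possible lift/release event sequences for k legs."""
    return factorial(2 * k_legs - 1)

print(possible_events(2))  # 6, matching the six two-legged events listed above
print(possible_events(6))  # 39916800
```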

Wheeled locomotion requires fewer motors to accomplish a movement. It is a little easier to implement, as there are fewer stability issues with a larger number of wheels. It is power-efficient compared to legged locomotion.
● Standard wheel − Rotates around the wheel axle and around the contact point.
● Castor wheel − Rotates around the wheel axle and the offset steering joint.
● Swedish 45° and Swedish 90° wheels − Omni-wheel, rotates around the contact point, around the wheel axle, and around the rollers.
● Ball or spherical wheel − Omnidirectional wheel, technically difficult to implement.

In this type, the vehicle uses tracks, as in a tank. The robot is steered by moving the tracks at different speeds in the same or opposite directions. It offers stability because of the large contact area between the tracks and the ground.

Robots are constructed with the following −
● Power Supply − The robots are powered by batteries, solar power, hydraulic, or pneumatic power sources.
● Actuators − They convert energy into movement.
● Electric motors (AC/DC) − They are required for rotational movement.
● Pneumatic Air Muscles − They contract almost 40% when air is sucked in them.
● Muscle Wires − They contract by 5% when electric current is passed through them.
● Piezo Motors and Ultrasonic Motors − Best for industrial robots.
● Sensors − They provide real-time information about the task environment. Robots are equipped with vision sensors to be able to compute the depth in the environment. A tactile sensor imitates the mechanical properties of the touch receptors of human fingertips.
Robotics has been instrumental in various domains such as −
● Industries − Robots are used for handling material, cutting, welding, color coating, drilling, polishing, etc.
● Military − Autonomous robots can reach inaccessible and hazardous zones during war. A robot named Daksh, developed by the Defence Research and Development Organisation (DRDO), is in service to destroy life-threatening objects safely.
● Medicine − Robots are capable of carrying out hundreds of clinical tests simultaneously, rehabilitating permanently disabled people, and performing complex surgeries such as brain tumor removal.
● Exploration − The robot rock climbers used for space exploration and the underwater drones used for ocean exploration, to name a few.
● Entertainment − Disney’s engineers have created hundreds of robots for movie making.
This is the technology of AI with which robots can see. Computer vision plays a vital role in the domains of safety, security, health, access, and entertainment.
Computer vision automatically extracts, analyzes, and comprehends useful information from a single image or an array of images. This process involves development of algorithms to accomplish automatic visual comprehension.
This involves −
● An image-acquisition device such as a camera
● A processor
● Software
● A display device for monitoring the system
● Accessories such as camera stands, cables, and connectors
● Agriculture
● Autonomous vehicles
● Biometrics
● Character recognition
● Forensics, security, and surveillance
● Industrial quality inspection
● Face recognition
● Gesture analysis
● Geoscience
● Medical imagery
● Pollution monitoring
● Process control
● Remote sensing
● Robotics
● Transport
Yet another research area in AI, neural networks, is inspired by the natural neural networks of the human nervous system.
The inventor of the first neurocomputer, Dr. Robert Hecht-Nielsen, defines a neural network as −
“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.”
The idea of ANNs is based on the belief that the working of the human brain, which makes the right connections, can be imitated using silicon and wires as living neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons. Each is connected to thousands of other cells by axons. Stimuli from the external environment, or inputs from sensory organs, are accepted by dendrites. These inputs create electric impulses, which quickly travel through the neural network. A neuron can then either pass the message on to another neuron or stop it.

ANNs are composed of multiple nodes, which imitate biological neurons of human brain. The neurons are connected by links and they interact with each other. The nodes can take input data and perform simple operations on the data. The result of these operations is passed to other neurons. The output at each node is called its activation or node value.
Each link is associated with a weight. ANNs are capable of learning, which takes place by altering the weight values. The following illustration shows a simple ANN −

Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as “cat” or “no cat” and using the analytic results to identify cats in other images. They have found most use in applications difficult to express in a traditional computer algorithm using rule-based programming.
Neural networks are parallel computing devices: basically, an attempt to make a computer model of the brain. The main objective is to develop a system that performs various computational tasks faster than traditional systems. These tasks include pattern recognition and classification, approximation, optimization, and data clustering.
What is Artificial Neural Network?
An Artificial Neural Network (ANN) is an efficient computing system whose central theme is borrowed from the analogy of biological neural networks. ANNs are also known as “artificial neural systems,” “parallel distributed processing systems,” or “connectionist systems.” An ANN consists of a large collection of units that are interconnected in some pattern to allow communication between the units. These units, also referred to as nodes or neurons, are simple processors which operate in parallel.
Every neuron is connected with other neuron through a connection link. Each connection link is associated with a weight that has information about the input signal. This is the most useful information for neurons to solve a particular problem because the weight usually excites or inhibits the signal that is being communicated. Each neuron has an internal state, which is called an activation signal. Output signals, which are produced after combining the input signals and activation rule, may be sent to other units.
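A minimal sketch of one such unit, combining weighted input signals with a simple step activation rule (the weights, bias, and function name are illustrative assumptions):

```python
def node_output(inputs, weights, bias=0.0):
    """Weighted sum of input signals followed by a step activation;
    positive weights excite the signal, negative weights inhibit it."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# two excitatory links and one inhibitory link (hypothetical values)
print(node_output([1, 1, 1], [0.5, 0.4, -0.6]))  # 1: net input 0.3 > 0
print(node_output([1, 1], [0.2, -0.5]))          # 0: net input -0.3
```

Learning in an ANN amounts to adjusting these weight values until the node's outputs match the desired behavior.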
A Brief History of ANN
The history of ANN can be divided into the following three eras −
ANN during 1940s to 1960s
Some key developments of this era are as follows −
· 1943 − It has been assumed that the concept of neural network started with the work of physiologist, Warren McCulloch, and mathematician, Walter Pitts, when in 1943 they modeled a simple neural network using electrical circuits in order to describe how neurons in the brain might work.
· 1949 − Donald Hebb’s book, The Organization of Behavior, put forth the fact that repeated activation of one neuron by another increases its strength each time they are used.
· 1956 − An associative memory network was introduced by Taylor.
· 1958 − A learning method for McCulloch and Pitts neuron model named Perceptron was invented by Rosenblatt.
· 1960 − Bernard Widrow and Marcian Hoff developed models called “ADALINE” and “MADALINE.”
ANN during 1960s to 1980s
Some key developments of this era are as follows −
· 1961 − Rosenblatt made an unsuccessful attempt but proposed the “backpropagation” scheme for multilayer networks.
· 1964 − Taylor constructed a winner-take-all circuit with inhibitions among output units.
· 1969 − Minsky and Papert published Perceptrons, analyzing the limitations of single-layer perceptron models.
· 1971 − Kohonen developed Associative memories.
· 1976 − Stephen Grossberg and Gail Carpenter developed Adaptive resonance theory.
ANN from 1980s till Present
Some key developments of this era are as follows −
· 1982 − The major development was Hopfield’s Energy approach.
· 1985 − Boltzmann machine was developed by Ackley, Hinton, and Sejnowski.
· 1986 − Rumelhart, Hinton, and Williams introduced Generalised Delta Rule.
· 1988 − Kosko developed Binary Associative Memory (BAM) and also gave the concept of Fuzzy Logic in ANN.
The historical review shows that significant progress has been made in this field. Neural network based chips are emerging and applications to complex problems are being developed. Surely, today is a period of transition for neural network technology.
Biological Neuron
A nerve cell (neuron) is a special biological cell that processes information. According to an estimate, there are a huge number of neurons, approximately 10^11, with numerous interconnections, approximately 10^15.
Schematic Diagram

Working of a Biological Neuron
As shown in the above diagram, a typical neuron consists of the following four parts with the help of which we can explain its working −
· Dendrites − They are tree-like branches, responsible for receiving information from the other neurons the neuron is connected to. In a sense, they are like the ears of the neuron.
· Soma − It is the cell body of the neuron and is responsible for processing the information received from the dendrites.
· Axon − It is like a cable through which the neuron sends information.
· Synapses − They are the connections between the axon and the dendrites of other neurons.
ANN versus BNN
Before taking a look at the differences between Artificial Neural Network (ANN) and Biological Neural Network (BNN), let us take a look at the similarities based on the terminology between these two.
| Biological Neural Network (BNN) | Artificial Neural Network (ANN) |
| Soma | Node |
| Dendrites | Input |
| Synapse | Weights or Interconnections |
| Axon | Output |
The following table shows the comparison between ANN and BNN based on some criteria mentioned.
| Criteria | BNN | ANN |
| Processing | Massively parallel, slow but superior to ANN | Massively parallel, fast but inferior to BNN |
| Size | 10^11 neurons and 10^15 interconnections | 10^2 to 10^4 nodes (mainly depends on the type of application and network designer) |
| Learning | They can tolerate ambiguity | Very precise, structured, and formatted data is required |
| Fault tolerance | Performance degrades gracefully with even partial damage | It is capable of robust performance, hence has the potential to be fault tolerant |
| Storage capacity | Stores information in the synapses | Stores information in continuous memory locations |
Fermentation is the process by which milk changes into curd. It is a chemical process that is usually initiated in the presence of bacteria. These bacteria produce certain chemicals that act as catalysts for the fermentation process.

The bacteria that convert milk into curd are called Lactobacillus. These bacteria convert the lactose present in the milk into lactic acid.
The formation of curd from milk is a chemical reaction and it is irreversible, that is, once the milk has been converted into curd, it cannot be changed back into milk.

Processes happening around us-
There are two types of processes that generally occur in nature. Chemical processes & Physical processes.
Chemical reactions/processes- These are processes in which-
the molecular or ionic structure of a single compound changes (only one substance is involved, as in the conversion of milk into curd), OR
two or more compounds react, so that the molecular or ionic structures of the substances change and new compounds are formed after the reaction.
As a result of chemical processes, the properties of the matter formed are different from those of the initial substances. For example, the formation of curd from milk, the burning of substances, etc.
Physical processes– The other type of processes that occur in nature are physical processes. In such processes, the molecular composition of the material does not change, even though its physical form may. Examples of physical processes are the interconversion of water into ice or steam, the thermal expansion of metals, etc.
Every time you strike a match, burn a candle, build a fire, or light a grill, you see a chemical reaction in action called combustion.

During combustion, or burning, carbon atoms in the fuel react with oxygen to produce carbon dioxide and water. Energy is released during combustion processes.
For example, the combustion reaction of propane, found in gas grills and some fireplaces, is:
C3H8 + 5O2 → 3CO2 + 4H2O + energy
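A quick way to confirm that the propane equation is balanced is to count the atoms on each side (the species dictionaries below are just illustrative bookkeeping):

```python
# Atom counts for each species in C3H8 + 5O2 -> 3CO2 + 4H2O
C3H8 = {"C": 3, "H": 8}
O2 = {"O": 2}
CO2 = {"C": 1, "O": 2}
H2O = {"H": 2, "O": 1}

def total(side):
    """Sum atom counts over (coefficient, formula) pairs."""
    out = {}
    for coef, formula in side:
        for element, n in formula.items():
            out[element] = out.get(element, 0) + coef * n
    return out

reactants = total([(1, C3H8), (5, O2)])
products = total([(3, CO2), (4, H2O)])
print(reactants == products)  # True: both sides carry 3 C, 8 H, and 10 O
```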

Over time, iron develops a red, flaky coating called rust- all of you would have seen it. This is actually an example of an oxidation reaction. Another everyday example is the tarnishing of silver.

Here is the chemical equation for the rusting of iron:
4Fe + 3O2 + 2xH2O → 2Fe2O3.xH2O
Note that the burning of things is also an example of an oxidation reaction.
It’s true that diamond, graphite, and soot are all made of carbon and are in fact pure forms of carbon. Let’s see how this is possible.
The property of a substance by which the same substance is found in two or more different forms is called allotropy. This is somewhat like ice and water: both are the same substance, just in different forms or phases due to temperature. In the case of allotropy, however, the difference is not one of temperature but of molecular structure- the way the atoms are arranged and bonded. Let us see what this actually means.

Graphite has a layered structure. Each carbon atom joins three other carbon atoms to form hexagonal rings, and these rings together form flat layers. The fourth electron of each carbon atom remains free. Because of the weak force of attraction between the layers, one layer may slip easily over another, so graphite is soft.

The carbon atoms in diamond, on the other hand, have strong bonds in three dimensions. In diamond, the atoms are very closely packed and each atom is connected to four other carbon atoms, giving it a very strong and rigid structure in three dimensions.
Carbon black, or soot, is an amorphous (non-crystalline) form of carbon. It is a powder.

Located in Mehrauli, Delhi, the Iron Pillar is a huge pillar near the Qutub Minar. It is one of the marvels of ancient Indian metallurgy and in a way displays its pinnacle.

The pillar weighs over 6,000 kg and is thought to have originally been erected in what is now Udayagiri by one of the Gupta monarchs in approximately 402 CE, though the precise date and location are a matter of dispute.
There is an ample amount of phosphorus (0.114%) and a small amount of sulphur (0.006%) in this column. Further, there is a coating of manganese oxide (MnO2), which keeps it safe. This is the reason the Iron Pillar does not rust.
Rusting- Rust is actually a substance formed by the reaction of iron with the oxygen present in air. Ferric oxide is formed by this reaction (which is called jung in the common language).
Jung is very dangerous for any iron structure because it makes the structure very weak and brittle. Iron structures are painted to avoid rusting.