ARTIFICIAL INTELLIGENCE
Artificial intelligence (AI) is intelligence exhibited by machines. In computer science,
the field of AI research defines itself as the study of "intelligent agents": any device
that perceives its environment and takes actions that maximize its chance of
success at some goal. Colloquially, the term
"artificial intelligence" is applied when a machine mimics
"cognitive" functions that humans associate with other human minds,
such as "learning" and "problem solving" (known as Machine Learning).
As machines become increasingly capable, mental facilities once thought to
require intelligence are removed from the definition. For instance, optical character recognition is no longer perceived as an example of
"artificial intelligence", having become a routine technology Capabilities currently classified as
AI include successfully understanding human speech, competing at a high level in strategic game systems (such as Chess and Go), self-driving cars,
intelligent routing in content delivery networks,
and interpreting complex data. AI research is divided into subfields that focus on specific problems or on specific approaches or on the use of a particular tool or towards satisfying particular applications. The central problems
(or goals) of AI research include reasoning, knowledge, planning, learning, natural language processing (communication), perception and the ability to move and manipulate
objects. General intelligence is among the field's long-term goals.
Approaches include statistical
methods, computational
intelligence, and traditional
symbolic AI. Many tools are used in AI, including versions of search
and mathematical optimization, logic, methods
based on probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, neuroscience and artificial psychology. The field was founded on the claim that human intelligence "can be so precisely described
that a machine can be made to simulate it". This raises philosophical arguments
about the nature of the mind and the ethics of creating artificial
beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity. Some people also consider AI a danger
to humanity if it progresses unabated. Attempts
to create artificial intelligence have experienced many setbacks,
including the ALPAC report of
1966, the abandonment of perceptrons in 1970, the Lighthill Report of
1973, the second AI winter 1987–1993 and the collapse of the Lisp machine market in 1987. In the twenty-first century, AI techniques,
both "hard" and "soft", have experienced a resurgence
following concurrent advances in computer
power, sizes of training
sets, and theoretical understanding, and AI techniques have become an
essential part of the technology industry, helping to solve many
challenging problems in computer science.
History
While thought-capable artificial beings appeared as storytelling devices in antiquity, the idea of actually trying to build a machine to perform useful reasoning may have begun with Ramon Llull (c. 1300 CE). With his Calculus ratiocinator, Gottfried Leibniz extended the concept of the calculating machine (Wilhelm Schickard engineered the first one around 1623), intending to perform operations on concepts rather than numbers. Since the 19th century, artificial beings have been common in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots).
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. In the 19th century, George Boole refined those ideas into propositional logic and Gottlob Frege developed a notational system for mechanical reasoning (a "predicate calculus"). Around the 1940s, Alan Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis. Along with concurrent discoveries in neurology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. The first work that is now generally recognized as AI was McCulloch and Pitts' 1943 formal design for Turing-complete "artificial neurons".
The field of AI research was "born" at a conference at Dartmouth College in 1956. Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research. At the conference, Newell and Simon, together with programmer J. C. Shaw (RAND), presented the first true artificial intelligence program, the Logic Theorist. This spurred tremendous research in the domain: computers were winning at checkers, solving word problems in algebra, proving logical theorems and speaking English. By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense and laboratories had been established around the world. AI's founders were optimistic about the future: Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do." Marvin Minsky agreed, writing, "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."
They failed to recognize the difficulty of some of the remaining tasks. Progress slowed, and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an "AI winter", a period when funding for AI projects was hard to find. In the early 1980s, AI research was revived by the commercial success of expert systems, a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research. However, beginning with the collapse of the Lisp machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.
In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas. The success was due to increasing computational power (see Moore's law), greater emphasis on solving specific problems, new ties between AI and other fields and a commitment by researchers to mathematical methods and scientific standards. Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997. Advanced statistical techniques (loosely known as deep learning), access to large amounts of data and faster computers enabled advances in machine learning and perception.
By the mid-2010s, machine learning applications were used throughout the world. In a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. The Kinect, which provides a 3D body–motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research, as do intelligent personal assistants in smartphones. In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. According to Bloomberg's Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from "sporadic usage" in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011. He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets. Other cited examples include Microsoft's development of a Skype system that can automatically translate from one language to another and Facebook's system that can describe images to blind people.
Goals
The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention. Erik Sandewall emphasizes planning and learning that is relevant and applicable to the given situation.
Reasoning, problem solving
Early researchers developed algorithms that imitated the step-by-step reasoning that humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics. For difficult problems, algorithms can require enormous computational resources: most suffer from a "combinatorial explosion", in which the amount of memory or computer time required becomes astronomical for problems beyond a certain size. The search for more efficient problem-solving algorithms is a high priority. Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI research was able to model. AI has progressed using "sub-symbolic" problem solving: embodied agent approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside the brain that give rise to this skill; and statistical approaches to AI mimic the probabilistic nature of human guessing.
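To make the search-based, step-by-step style of reasoning above concrete, here is a minimal sketch of state-space search in Python: a breadth-first search over the classic two-jug water-measuring puzzle. The puzzle, the state encoding and the helper names are illustrative assumptions, not any particular system's implementation; note how the number of explored states grows quickly with problem size, which is the combinatorial explosion the text describes.

```python
from collections import deque

def neighbors(state, capacities=(3, 5)):
    """All states reachable in one move of the (hypothetical) water-jug puzzle."""
    a, b = state
    ca, cb = capacities
    moves = {
        (ca, b), (a, cb),          # fill either jug
        (0, b), (a, 0),            # empty either jug
        (a - min(a, cb - b), b + min(a, cb - b)),  # pour jug a into jug b
        (a + min(b, ca - a), b - min(b, ca - a)),  # pour jug b into jug a
    }
    return moves - {state}

def solve(start=(0, 0), goal_amount=4):
    """Breadth-first search: explore states level by level until a goal state appears.
    The frontier can grow very quickly with depth (the 'combinatorial explosion')."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_amount in state:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None  # no sequence of moves reaches the goal

print(solve())  # e.g. [(0, 0), (0, 5), (3, 2), (0, 2), (2, 0), (2, 5), (3, 4)]
```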
Knowledge representation
Main articles: Knowledge representation and Commonsense knowledge
An
ontology represents knowledge as a set of concepts within a domain and the
relationships between those concepts.
Knowledge representation and knowledge engineering are
central to AI research. Many of the problems
machines are expected to solve will require extensive knowledge about the
world. Among the things that AI needs to represent are: objects, properties,
categories and relations between objects; situations, events, states and
time; causes and effects; knowledge about knowledge (what we know
about what other people know); and many other, less well researched
domains. A representation of "what exists" is an ontology: the
set of objects, relations, concepts and so on that the machine knows about. The
most general are called upper ontologies, which attempt to provide a
foundation for all other knowledge. Among the
most difficult problems in knowledge representation are:
Default reasoning and
the qualification problem: Many of the things people know
take the form of
"working assumptions". For example, if a bird comes up in
conversation, people typically picture an animal that is fist-sized, sings, and
flies. None of these things are true about all birds. John
McCarthy identified this problem in 1969 as the qualification problem:
for any commonsense rule that AI researchers care to represent, there tend to
be a huge number of exceptions. Almost nothing is simply true or false in the
way that abstract logic requires. AI research has explored a number of
solutions to this problem.
The breadth of
commonsense knowledge: The
number of atomic facts that the average person knows is very large. Research
projects that attempt to build a complete knowledge base of commonsense
knowledge (e.g., Cyc) require enormous amounts of
laborious ontological engineering—they must be built, by hand, one
complicated concept at a time. A major goal is to have the computer
understand enough concepts to be able to learn by reading from sources like the
Internet, and thus be able to add to its own ontology.
The sub-symbolic form
of some commonsense knowledge: Much of what people know is
not represented as "facts" or "statements" that they could
express verbally. For example, a chess master will avoid a particular chess
position because it "feels too exposed" or an art critic can take one
look at a statue and realize that it is a fake. These are intuitions or
tendencies that are represented in the brain non-consciously and
sub-symbolically. Knowledge like this informs, supports and provides a context
for symbolic, conscious knowledge. As with the related problem of sub-symbolic
reasoning, it is hoped that situated AI, computational intelligence,
or statistical AI will provide ways to represent this kind of
knowledge.
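As a rough illustration of the ideas above, the sketch below encodes a tiny ontology with is-a links, default properties and exceptions, which is one simple way to approach default reasoning and the qualification problem. The concepts and properties are invented for illustration and are not drawn from any real knowledge base such as Cyc.

```python
# Toy ontology: concepts, "is-a" links, default properties, and exceptions.
# All names and relations here are made up for illustration.
ontology = {
    "bird":    {"is_a": "animal", "defaults": {"can_fly": True, "sings": True}},
    "penguin": {"is_a": "bird",   "defaults": {"can_fly": False}},
    "robin":   {"is_a": "bird",   "defaults": {}},
    "animal":  {"is_a": None,     "defaults": {"alive": True}},
}

def lookup(concept, prop):
    """Walk up the is-a hierarchy; the most specific default wins,
    so exceptions (penguins) override commonsense rules (birds fly)."""
    while concept is not None:
        defaults = ontology[concept]["defaults"]
        if prop in defaults:
            return defaults[prop]
        concept = ontology[concept]["is_a"]
    return None  # the ontology has no opinion

print(lookup("robin", "can_fly"))    # True  (inherited bird default)
print(lookup("penguin", "can_fly"))  # False (exception overrides the default)
```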
Planning
A hierarchical control system is a form of control system in which a set of devices and governing software is arranged in a hierarchy.
Intelligent agents must be able to set goals
and achieve them. They need a way to visualize the
future (they must have a representation of the state of the world and be able
to make predictions about how their actions will change it) and be able to make
choices that maximize the utility (or "value") of the
available options. In classical planning problems, the agent can
assume that it is the only thing acting on the world and it can be certain what
the consequences of its actions may be. However, if the agent is not the only
actor, it must periodically ascertain whether the world matches its predictions
and it must change its plan as this becomes necessary, requiring the agent to
reason under uncertainty. Multi-agent planning uses the cooperation and competition of many agents to
achieve a given goal. Emergent
behavior such as this
is used by evolutionary algorithms and swarm intelligence.
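The following sketch illustrates the classical-planning setting described above: an agent with a model of how actions change the world searches for a sequence of actions that reaches a goal, assuming it is the only actor and that action outcomes are certain. The domain, action names and facts are hypothetical.

```python
from collections import deque

# Hypothetical one-robot domain: each action has preconditions, facts it adds,
# and facts it deletes, over a world state represented as a set of true facts.
ACTIONS = {
    "pick_up_key": ({"at_door", "key_on_floor"}, {"has_key"}, {"key_on_floor"}),
    "unlock_door": ({"at_door", "has_key"},      {"door_open"}, set()),
    "go_to_door":  (set(),                       {"at_door"}, set()),
}

def plan(start, goal):
    """Breadth-first forward search over world states until the goal facts hold."""
    frontier = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:               # every goal fact is true in this state
            return steps
        for name, (pre, added, deleted) in ACTIONS.items():
            if pre <= state:            # action is applicable
                nxt = frozenset((state - deleted) | added)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"key_on_floor"}, {"door_open"}))
# e.g. ['go_to_door', 'pick_up_key', 'unlock_door']
```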
Learning
Main article: Machine learning
Machine learning is the study of computer algorithms that improve automatically through experience and has been central to AI research since the field's inception. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space. These three types of learning can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Within developmental robotics, developmental learning approaches were elaborated for lifelong cumulative acquisition of repertoires of novel skills by a robot, through autonomous self-exploration and social interaction with human teachers, and using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
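To make the supervised-learning terminology above concrete, here is a minimal regression example: fitting a line to input/output pairs by ordinary least squares and using it to predict a new output. The data are made up, and real systems use far richer models; this is only a sketch of the idea.

```python
import numpy as np

# Hypothetical training data: inputs x and observed outputs y (supervised learning).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Regression: find a function (here a line, y ~ w*x + b) that describes the
# relationship between inputs and outputs, by minimizing squared error.
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"learned model: y ~ {w:.2f} * x + {b:.2f}")
print("prediction for x = 5:", w * 5 + b)  # how the output should change as the input changes
```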
Natural language processing
Main article: Natural language processing
A parse tree represents the syntactic structure of a sentence according to some formal grammar.
Natural language processing gives machines the ability to read
and understand the languages that humans speak. A sufficiently
powerful natural language processing system would enable natural language
user interfaces and the acquisition of knowledge directly from
human-written sources, such as newswire texts. Some straightforward
applications of natural language processing include information
retrieval, text mining, question answering and machine
translation. A common method of processing and extracting meaning from
natural language is through semantic indexing. Increases in processing speeds
and the drop in the cost of data storage make indexing large volumes of
abstractions of the user's input much more efficient.
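As a small illustration of the information retrieval application mentioned above, the sketch below builds an inverted index over a few made-up documents and answers a query by term overlap; this is a much cruder scheme than the semantic indexing the text refers to, and the documents and query are invented.

```python
from collections import defaultdict

# Hypothetical mini-corpus.
docs = {
    "d1": "the robot picked up the red block",
    "d2": "machine translation converts text between languages",
    "d3": "the parser builds a tree for each sentence",
}

# Inverted index: word -> set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(query):
    """Rank documents by how many query words they contain (term overlap)."""
    scores = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, ()):
            scores[doc_id] += 1
    return sorted(scores, key=scores.get, reverse=True)

print(search("sentence tree"))  # ['d3']
```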
Perception
Main
articles: Machine perception, Computer
vision, and Speech recognition
Machine
perception is the ability to use input from sensors (such as cameras,
microphones, tactile sensors, sonar and others more exotic) to deduce
aspects of the world. Computer vision is the ability to analyze
visual input. A few selected sub-problems are speech recognition, facial
recognition and object recognition.
Motion and manipulation
Main
article: Robotics
The field of robotics is closely related to AI. Intelligence
is required for robots to be able to handle such tasks as object manipulation and navigation,
with sub-problems of localization (knowing
where you are, or finding out where other things are), mapping (learning what is around you, building
a map of the environment), and motion
planning (figuring out
how to get there) or path planning (going from one point in space to another
point, which may involve compliant motion – where the robot moves while
maintaining physical contact with an object).
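The sketch below illustrates the path-planning sub-problem just described: finding a route from one grid cell to another around obstacles, here with A* search and a Manhattan-distance heuristic. The occupancy grid, start and goal are invented examples; real robots combine such planning with localization, mapping and continuous motion planning.

```python
import heapq

# Hypothetical occupancy grid: 0 = free cell, 1 = obstacle.
GRID = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def astar(start, goal):
    """A* path planning: expand cells in order of cost-so-far plus a
    Manhattan-distance estimate of the remaining cost to the goal."""
    rows, cols = len(GRID), len(GRID[0])
    heuristic = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(heuristic(start), 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            nxt = (r, c)
            if 0 <= r < rows and 0 <= c < cols and GRID[r][c] == 0:
                if cost + 1 < best_cost.get(nxt, float("inf")):
                    best_cost[nxt] = cost + 1
                    heapq.heappush(frontier, (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None  # goal unreachable

print(astar((0, 0), (4, 4)))  # list of (row, col) cells from start to goal
```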
Social intelligence
Main
article: Affective computing
Kismet,
a robot with rudimentary social skills
Affective
computing is the study and development of systems and devices that can
recognize, interpret, process, and simulate human affects. It is an
interdisciplinary field spanning computer sciences, psychology, and cognitive
science. While the origins of the field may be traced as far back as to
early philosophical inquiries into emotion, the more modern branch of
computer science originated with Rosalind Picard's 1995 paper on
affective computing. A motivation for the research is the ability to simulate empathy.
The machine should interpret the emotional state of humans and adapt its
behavior to them, giving an appropriate response for those emotions. Emotion
and social skills play two roles for an intelligent agent. First, it must
be able to predict the actions of others, by understanding their motives and
emotional states. (This involves elements of game theory, decision
theory, as well as the ability to model human emotions and the perceptual
skills to detect emotions.) Also, in an effort to facilitate human-computer
interaction, an intelligent machine might want to be able
to display emotions—even if it does not actually experience them
itself—in order to appear sensitive to the emotional dynamics of human
interaction.
Creativity
Main
article: Computational creativity
A sub-field of AI
addresses creativity both theoretically (from a
philosophical and psychological perspective) and practically (via specific
implementations of systems that generate outputs that can be considered
creative, or systems that identify and assess creativity). Related areas of
computational research are Artificial intuition and Artificial thinking.
General intelligence
Main articles: Artificial general intelligence and AI-complete
Many researchers
think that their work will eventually be incorporated into a machine with artificial general intelligence,
combining all the skills above and exceeding human abilities at most or all of
them. A few believe that anthropomorphic features like artificial consciousness or an artificial
brain may be required
for such a project. Many of the problems
above may require general intelligence to be considered solved. For example,
even a straightforward, specific task like machine translation requires that the machine read and
write in both languages (NLP),
follow the author's argument (reason),
know what is being talked about (knowledge),
and faithfully reproduce the author's intention (social
intelligence). A problem like machine translation is considered "AI-complete".
To reach human-level performance, a machine must solve all of these
problems together.