[[Image:Neural network example.png|thumb|right|Simplified view of an artificial neural network]]

Traditionally, the term '''neural network''' was used to refer to a network or circuit of [[neuron|biological neurons]]. The modern usage of the term often refers to [[artificial neural network]]s, which are composed of [[artificial neuron]]s or nodes. Thus the term has two distinct usages:
# [[Biological neural network]]s are made up of real biological neurons that are connected or functionally related in the [[peripheral nervous system]] or the [[central nervous system]]. In the field of [[neuroscience]], they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
# [[Artificial neural network]]s are made up of interconnected artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may be used either to gain an understanding of biological neural networks or to solve artificial intelligence problems without necessarily creating a model of a real biological system.

This article focuses on the relationship between the two concepts; for detailed coverage of each, refer to the separate articles [[Biological neural network]] and [[Artificial neural network]].

==Characterization==
In general, a biological neural network is composed of a group or groups of chemically connected or functionally associated neurons. A single neuron may be connected to many other neurons, and the total number of neurons and connections in a network may be extensive. Connections, called [[synapses]], are usually formed from [[axons]] to [[dendrites]], though dendrodendritic microcircuits<ref>Arbib, p. 666</ref> and other connections are possible. Apart from electrical signalling, there are other forms of signalling that arise from [[neurotransmitter]] diffusion and that modulate electrical signalling. As a result, neural networks are extremely complex.

[[Artificial intelligence]] and [[cognitive modeling]] try to simulate some properties of neural networks. While similar in their techniques, the former aims to solve particular tasks, while the latter aims to build mathematical models of biological neural systems.

In the [[artificial intelligence]] field, artificial neural networks have been applied successfully to [[speech recognition]], [[image analysis]] and adaptive [[control]], in order to construct [[software agents]] (in [[Video game|computer and video games]]) or [[autonomous robot]]s. Most of the artificial neural networks currently employed for artificial intelligence are based on [[statistical estimation]], [[Optimization (mathematics)|optimization]] and [[control theory]].

The [[cognitive modelling]] field involves the physical or mathematical modelling of the behaviour of neural systems, ranging from the individual neural level (e.g. modelling the spike response curves of neurons to a stimulus), through the neural cluster level (e.g. modelling the release and effects of dopamine in the basal ganglia), to the complete organism (e.g. behavioural modelling of the organism's response to stimuli).

==The brain, neural networks and computers==
Neural networks, as used in artificial intelligence, have traditionally been viewed as simplified models of neural processing in the brain, even though the relation between this model and the biological architecture of the brain is debated.
A subject of current research in theoretical neuroscience is the question of what degree of complexity and what properties individual neural elements need in order to reproduce something resembling animal intelligence.

Historically, computers evolved from the [[von Neumann architecture]], which is based on sequential processing and execution of explicit instructions. The origins of neural networks, by contrast, lie in efforts to model information processing in biological systems, which may rely largely on parallel processing as well as implicit instructions based on recognition of patterns of 'sensory' input from external sources. In other words, at its very heart a neural network is a complex statistical processor, as opposed to a machine tasked to process and execute instructions sequentially.

==Neural networks and artificial intelligence==
{{main|Artificial neural network}}
An ''artificial neural network'' (ANN), also called a ''simulated neural network'' (SNN) or commonly just ''neural network'' (NN), is an interconnected group of [[artificial neuron]]s that uses a [[mathematical model|mathematical or computational model]] for [[information processing]] based on a [[connectionism|connectionistic]] approach to [[computation]]. In most cases an ANN is an [[adaptive system]] that changes its structure based on external or internal information that flows through the network.

In more practical terms, neural networks are [[non-linear]] [[statistical]] [[data modeling]] or [[decision making]] tools. They can be used to model complex relationships between inputs and outputs or to [[Pattern recognition|find patterns]] in data.

===Background===
An [[artificial neural network]] involves a network of simple processing elements ([[artificial neurons]]) which can exhibit complex global behaviour, determined by the connections between the processing elements and the element parameters. One classical type of artificial neural network is the [[Hopfield net]].

In a neural network model, simple [[Node (neural networks)|nodes]], which can variously be called "neurons", "neurodes", "processing elements" (PEs) or "units", are connected together to form a network of nodes — hence the term "neural network". While a neural network does not have to be adaptive ''per se'', its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.
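As a concrete illustration, the behaviour of a single weighted-sum unit can be sketched in a few lines of Python. This is a minimal sketch, not the code of any particular library; the logistic activation and the weight values are illustrative assumptions.

<syntaxhighlight lang="python">
import math

def neuron_output(inputs, weights, bias):
    """One artificial neuron: a weighted sum of the inputs plus a bias,
    passed through a logistic (sigmoid) activation function."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# A tiny feedforward network: two hidden units feeding one output unit.
# All weights and biases here are arbitrary illustrative values; a
# learning algorithm would adjust them to produce a desired signal flow.
x = [0.5, -1.0]
hidden = [neuron_output(x, [0.4, 0.6], 0.1),
          neuron_output(x, [-0.3, 0.2], 0.0)]
print(neuron_output(hidden, [0.7, -0.5], 0.2))
</syntaxhighlight>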
In modern [[Neural network software|software implementations]] of artificial neural networks, the approach inspired by biology has more or less been abandoned in favour of a more practical approach based on statistics and signal processing. In some of these systems, neural networks, or parts of neural networks (such as [[artificial neuron]]s), are used as components in larger systems that combine both adaptive and non-adaptive elements.

The concept of a neural network appears to have first been proposed by [[Alan Turing]] in his 1948 paper "Intelligent Machinery".

===Applications===
The utility of artificial neural network models lies in the fact that they can be used to infer a function from observations and also to use it. This is particularly useful in applications where the complexity of the data or task makes the design of such a function by hand impractical.

====Real life applications====
The tasks to which artificial neural networks are applied tend to fall within the following broad categories:
*[[Function approximation]], or [[regression analysis]], including [[time series prediction]] and modelling.
*[[Statistical classification|Classification]], including [[Pattern recognition|pattern]] and sequence recognition, novelty detection and sequential decision making.
*[[Data processing]], including filtering, clustering, [[blind signal separation]] and compression.

Application areas include system identification and control (vehicle control, process control), game-playing and decision making (backgammon, chess, racing), pattern recognition (radar systems, face identification, object recognition, etc.), sequence recognition (gesture, speech, handwritten text recognition), medical diagnosis, financial applications, [[data mining]] (or knowledge discovery in databases, "KDD"), visualization and [[e-mail spam]] filtering.

===Neural network software===
{{main|Neural network software}}
'''Neural network software''' is used to [[Simulation|simulate]], [[research]], [[Software development|develop]] and apply [[artificial neural network]]s, [[biological neural network]]s and, in some cases, a wider array of [[adaptive system]]s.

====Learning paradigms====
There are three major learning paradigms, each corresponding to a particular abstract learning task: [[supervised learning]], [[unsupervised learning]] and [[reinforcement learning]]. Usually any given type of network architecture can be employed in any of those tasks.

;Supervised learning
In [[supervised learning]], we are given a set of example pairs <math>(x, y)</math>, <math>x \in X</math>, <math>y \in Y</math>, and the aim is to find a function <math>f</math> in the allowed class of functions that matches the examples. In other words, we wish to ''infer'' the mapping implied by the data; the cost function is related to the mismatch between our mapping and the data.
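As an illustration of this paradigm, the following minimal Python sketch fits a linear mapping to example pairs by stochastic gradient descent on a squared-error cost; the data set, the linear model and the learning rate are illustrative assumptions, not part of any standard recipe.

<syntaxhighlight lang="python">
# Supervised learning as cost minimisation: fit f(x) = w*x + b to
# example pairs (x, y) by descending the gradient of the squared-error
# cost 0.5*((w*x + b) - y)**2 one example at a time.
pairs = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # generated by y = 2x + 1
w, b, rate = 0.0, 0.0, 0.1

for epoch in range(1000):
    for x, y in pairs:
        error = (w * x + b) - y   # mismatch between our mapping and the data
        w -= rate * error * x     # gradient of the cost with respect to w
        b -= rate * error         # gradient of the cost with respect to b

print(w, b)  # approaches (2.0, 1.0)
</syntaxhighlight>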
;Unsupervised learning
In [[unsupervised learning]], we are given some data <math>x</math> and a cost function to be minimized, which can be any function of <math>x</math> and the network's output <math>f</math>. The cost function is determined by the task formulation. Most applications fall within the domain of [[estimation problems]] such as [[statistical modeling]], [[Data compression|compression]], [[Mail filter|filtering]], [[blind source separation]] and [[data clustering|clustering]].

;Reinforcement learning
In [[reinforcement learning]], data <math>x</math> is usually not given but generated by an agent's interactions with the environment. At each point in time <math>t</math>, the agent performs an action <math>y_t</math> and the environment generates an observation <math>x_t</math> and an instantaneous cost <math>c_t</math>, according to some (usually unknown) dynamics. The aim is to discover a ''policy'' for selecting actions that minimises some measure of long-term cost, i.e. the expected cumulative cost. The environment's dynamics and the long-term cost of each policy are usually unknown, but can be estimated. ANNs are frequently used in reinforcement learning as part of the overall algorithm. Tasks that fall within the paradigm of reinforcement learning are [[control]] problems, [[game]]s and other [[sequential decision making]] tasks.

====Learning algorithms====
There are many algorithms for training neural networks; most of them can be viewed as a straightforward application of [[Optimization (mathematics)|optimization]] theory and [[statistical estimation]]. [[Evolutionary computation]] methods, [[simulated annealing]], [[Expectation-Maximization|expectation maximization]] and [[non-parametric methods]] are among other commonly used methods for training neural networks. See also [[machine learning]]. Recent developments in this field have also seen the use of [[particle swarm optimization]] and other [[swarm intelligence]] techniques in the training of neural networks.

==Neural networks and neuroscience==
Theoretical and [[computational neuroscience]] is the field concerned with the theoretical analysis and computational modeling of biological neural systems. Since neural systems are intimately related to cognitive processes and behaviour, the field is closely related to cognitive and behavioural modeling.

The aim of the field is to create models of biological neural systems in order to understand how biological systems work. To gain this understanding, neuroscientists strive to link observed biological processes (data), biologically plausible mechanisms for neural processing and learning ([[biological neural network]] models) and theory (statistical learning theory and [[information theory]]).

===Types of models===
Many models are used in the field, each defined at a different level of abstraction and trying to model different aspects of neural systems. They range from models of the short-term behaviour of [[biological neuron models|individual neurons]], through models of how the dynamics of neural circuitry arise from interactions between individual neurons, to models of how behaviour can arise from abstract neural modules that represent complete subsystems. These include models of the long-term and short-term plasticity of neural systems and its relation to learning and memory, from the individual neuron to the system level.

===Current research===
While research was initially concerned mostly with the electrical characteristics of neurons, a particularly important part of the investigation in recent years has been the exploration of the role of [[neuromodulators]] such as [[dopamine]], [[acetylcholine]] and [[serotonin]] in behaviour and learning. [[Biophysics|Biophysical]] models, such as [[BCM theory]], have been important in understanding mechanisms for [[synaptic plasticity]], and have had applications in both computer science and neuroscience. Research is ongoing in understanding the computational algorithms used in the brain, with some recent biological evidence for [[radial basis networks]] and [[neural backpropagation]] as mechanisms for processing data.

==History of the neural network analogy==
{{main|Connectionism}}
The concept of neural networks started in the late 1800s as an effort to describe how the human mind worked. These ideas began to be applied to computational models with the [[Perceptron]].

In the early 1950s, [[Friedrich Hayek]] was one of the first to posit the idea of [[spontaneous order]]{{Fact|date=May 2008}} in the brain arising out of decentralized networks of simple units (neurons). In the late 1940s, [[Donald Hebb]] made one of the first hypotheses for a mechanism of neural plasticity (i.e. learning), [[Hebbian learning]]. Hebbian learning is considered to be a 'typical' unsupervised learning rule, and it (and variants of it) was an early model for [[long term potentiation]].

The [[Perceptron]] is essentially a linear classifier for classifying data <math>x \in \mathbb{R}^n</math> specified by parameters <math>w \in \mathbb{R}^n</math>, <math>b \in \mathbb{R}</math> and an output function <math>f = w'x + b</math>. Its parameters are adapted with an ad-hoc rule similar to stochastic steepest gradient descent.
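A minimal sketch of such a mistake-driven rule in Python, under illustrative assumptions (a small linearly separable data set, unit learning rate, a fixed number of passes), might look like this:

<syntaxhighlight lang="python">
# Perceptron rule: predict with the sign of w'x + b and nudge the
# parameters towards each misclassified example. The data, learning
# rate and stopping criterion here are illustrative assumptions.
data = [([2.0, 1.0], 1), ([1.0, 3.0], 1),
        ([-1.0, -2.0], -1), ([-2.0, 0.0], -1)]   # linearly separable
w, b, rate = [0.0, 0.0], 0.0, 1.0

for epoch in range(20):
    for x, label in data:
        predicted = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
        if predicted != label:                    # update only on mistakes
            w = [wi + rate * label * xi for wi, xi in zip(w, x)]
            b += rate * label

print(w, b)
</syntaxhighlight>

On linearly separable data such as the above, this rule converges after a finite number of updates (the perceptron convergence theorem); on non-separable data it never settles, which is the inadequacy discussed below.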
Because the [[inner product]] is a [[linear operator]] in the input space, the Perceptron can only perfectly classify a set of data for which the different classes are [[linearly separable]] in the input space, and it often fails completely for non-separable data. While the development of the algorithm initially generated some enthusiasm, partly because of its apparent relation to biological mechanisms, the later discovery of this inadequacy caused such models to be abandoned until the introduction of non-linear models into the field.

The [[Cognitron]] (1975) was an early multilayered neural network with a training algorithm. The actual structure of the network and the methods used to set the interconnection weights vary from one neural strategy to another, each with its advantages and disadvantages. Networks can propagate information in one direction only, or they can bounce back and forth until self-activation at a node occurs and the network settles on a final state. The ability for bi-directional flow of inputs between neurons/nodes was produced with [[Hopfield net|Hopfield's network]] (1982), and specialization of these node layers for specific purposes was introduced through the first [[hybrid neural network|hybrid network]].

The [[connectionism|parallel distributed processing]] of the mid-1980s became popular under the name [[connectionism]].

The rediscovery of the [[backpropagation]] algorithm was probably the main reason behind the repopularisation of neural networks after the publication of "Learning Internal Representations by Error Propagation" in 1986 (though backpropagation itself dates from 1974). The original network utilised multiple layers of weight-sum units of the type <math>f = g(w'x + b)</math>, where <math>g</math> was a [[sigmoid function]] or [[logistic function]] such as used in [[logistic regression]]. Training was done by a form of stochastic steepest gradient descent. The employment of the chain rule of differentiation in deriving the appropriate parameter updates results in an algorithm that seems to 'backpropagate errors', hence the nomenclature; however, it is essentially a form of gradient descent. Determining the optimal parameters in a model of this type is not trivial, and steepest gradient descent methods cannot be relied upon to give the solution without a good starting point. In recent times, networks with the same architecture as the backpropagation network are referred to as [[Multilayer perceptron|multilayer perceptrons]]. This name does not impose any limitations on the type of algorithm used for learning.
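A compact sketch of these chain-rule updates, assuming one hidden layer of logistic units, a squared-error cost and the XOR task (illustrative choices, not the original 1986 setup), might look like this:

<syntaxhighlight lang="python">
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

# 2 inputs -> 2 logistic hidden units -> 1 logistic output, trained on
# XOR by stochastic steepest gradient descent on a squared-error cost.
random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
rate = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    """Forward pass: f = g(w'x + b) at each layer."""
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(2)]
    return h, sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)

for epoch in range(10000):
    for x, t in data:
        h, y = forward(x)
        # Chain rule: the output error is 'backpropagated' through the
        # logistic derivatives y*(1-y) and h*(1-h) to every weight.
        delta_out = (y - t) * y * (1 - y)
        delta_hid = [delta_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= rate * delta_out * h[j]
            b1[j] -= rate * delta_hid[j]
            for i in range(2):
                W1[j][i] -= rate * delta_hid[j] * x[i]
        b2 -= rate * delta_out

# Outputs should approach 0, 1, 1, 0, though convergence from a given
# random start is not guaranteed (gradient descent can stall, as noted).
print([round(forward(x)[1], 2) for x, _ in data])
</syntaxhighlight>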
The backpropagation network generated much enthusiasm at the time, and there was much controversy about whether such learning could be implemented in the brain, partly because a mechanism for reverse signalling was not obvious at the time, but most importantly because there was no plausible source for the 'teaching' or 'target' signal.

==Criticism==
[[A. K. Dewdney]], a former ''[[Scientific American]]'' columnist, wrote in 1997: ''"Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool."'' (Dewdney, p. 82)

Arguments against Dewdney's position are that neural nets have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft[http://www.nasa.gov/centers/dryden/news/NewsReleases/2003/03-49.html] to detecting credit card fraud[http://www.visa.ca/en/about/visabenefits/innovation.cfm].

Technology writer [[Roger Bridgman]] commented on Dewdney's statements about neural nets:
<blockquote>Neural networks, for instance, are in the dock not only because they have been hyped to high heaven (what hasn't?) but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be "an opaque, unreadable table... valueless as a scientific resource". In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers. An unreadable table that a useful machine could read would still be well worth having.<ref>[http://members.fortunecity.com/templarseries/popper.html Roger Bridgman's defence of neural networks]</ref></blockquote>
==See also==
*[[20Q]], a neural network implementation of the 20 questions game
*[[ADALINE]]
*[[Artificial neural network]]
*[[Biological cybernetics]]
*[[Biologically-inspired computing]]
*[[Cognitive architecture]]
*[[Cognitive science]]
*[[Cultured neuronal networks]]
*[[Neural network software]]
*[[Neuro-fuzzy]]
*[[Neuroscience]]
*[[Parallel distributed processing]]
*[[Predictive analytics]]
*[[Radial basis function network]]
*[[Simulated reality]]
*[[Support vector machine]]
*[[Tensor product network]]
==References==
{{reflist|2}}
{{refbegin|2}}
*{{cite book | author=Arbib, Michael A. (Ed.) | title=The Handbook of Brain Theory and Neural Networks | year=1995}}
*Alspector, {{US patent|4874963}} "''Neuromorphic learning networks''". October 17, 1989.
*{{cite book | author=Agre, Philip E., et al. | title=Comparative Cognitive Robotics: Computation and Human Experience | publisher=Cambridge University Press | year=1997 | id=ISBN 0-521-38603-9}}, p. 80
*{{cite book | author=Bar-Yam, Yaneer | title=[http://necsi.org/publications/dcs/Bar-YamChap2.pdf Dynamics of Complex Systems, Chapter 2] | year=2003}}
*{{cite book | author=Bar-Yam, Yaneer | title=[http://necsi.org/publications/dcs/Bar-YamChap3.pdf Dynamics of Complex Systems, Chapter 3] | year=2003}}
*{{cite book | author=Bar-Yam, Yaneer | title=[http://necsi.org/publications/mtw/ Making Things Work] | year=2005}} See chapter 3.
*{{cite book | author=Bertsekas, Dimitri P. | title=Nonlinear Programming | year=1999}}
*{{cite book | author=Bertsekas, Dimitri P. & Tsitsiklis, John N. | title=Neuro-dynamic Programming | year=1996}}
*{{cite journal | author=Bhadeshia, H. K. D. H. | year=1999 | title=[http://www.msm.cam.ac.uk/phase-trans/abstracts/neural.review.pdf Neural Networks in Materials Science] | journal=ISIJ International | volume=39 | pages=966–979 | doi=10.2355/isijinternational.39.966}}
*{{cite book | author=Boyd, Stephen & Vandenberghe, Lieven | title=[http://www.stanford.edu/~boyd/cvxbook/ Convex Optimization] | year=2004}}
*{{cite book | author=Dewdney, A. K. | title=Yes, We Have No Neutrons: An Eye-Opening Tour through the Twists and Turns of Bad Science | year=1997 | publisher=Wiley, 192 pp}} See chapter 5.
*{{cite journal | author=Egmont-Petersen, M., de Ridder, D. & Handels, H. | year=2002 | title=Image processing with neural networks - a review | journal=Pattern Recognition | volume=35 | number=10 | pages=2279–2301 | doi=10.1016/S0031-3203(01)00178-9}}
*{{cite journal | author=Fukushima, K. | year=1975 | title=Cognitron: A Self-Organizing Multilayered Neural Network | journal=Biological Cybernetics | volume=20 | pages=121–136 | doi=10.1007/BF00342633}}
*{{cite journal | author=Frank, Michael J. | year=2005 | title=Dynamic Dopamine Modulation in the Basal Ganglia: A Neurocomputational Account of Cognitive Deficits in Medicated and Non-medicated Parkinsonism | journal=Journal of Cognitive Neuroscience | volume=17 | pages=51–72 | doi=10.1162/0898929052880093}}
*{{cite journal | author=Gardner, E. J. & Derrida, B. | year=1988 | title=Optimal storage properties of neural network models | journal=Journal of Physics A | volume=21 | pages=271–284 | doi=10.1088/0305-4470/21/1/031}}
*{{cite journal | author=Krauth, W. & Mezard, M. | year=1989 | title=Storage capacity of memory with binary couplings | journal=Journal de Physique | volume=50 | pages=3057–3066 | doi=10.1051/jphys:0198900500200305700}}
*{{cite journal | author=Maass, W. & Markram, H. | year=2002 | title=[http://www.igi.tugraz.at/maass/publications.html On the computational power of recurrent circuits of spiking neurons] | journal=Journal of Computer and System Sciences | volume=69 | number=4 | pages=593–616}}
*{{cite book | author=MacKay, David | title=[http://www.inference.phy.cam.ac.uk/mackay/itprnn/book.html Information Theory, Inference, and Learning Algorithms] | year=2003}}
*{{cite book | author=Mandic, D. & Chambers, J. | title=Recurrent Neural Networks for Prediction: Architectures, Learning Algorithms and Stability | publisher=Wiley | year=2001}}
*{{cite book | author=Minsky, M. & Papert, S. | title=An Introduction to Computational Geometry | publisher=MIT Press | year=1969}}
*{{cite journal | author=Muller, P. & Insua, D. R. | year=1995 | title=Issues in Bayesian Analysis of Neural Network Models | journal=Neural Computation | volume=10 | pages=571–592}}
*{{cite journal | author=Reilly, D. L., Cooper, L. N. & Elbaum, C. | year=1982 | title=A Neural Model for Category Learning | journal=Biological Cybernetics | volume=45 | pages=35–41 | doi=10.1007/BF00387211}}
*{{cite book | author=Rosenblatt, F. | title=Principles of Neurodynamics | publisher=Spartan Books | year=1962}}
*{{cite book | author=Sutton, Richard S. & Barto, Andrew G. | title=[http://www.cs.ualberta.ca/~sutton/book/the-book.html Reinforcement Learning: An Introduction] | year=1998}}
*{{cite paper | author=Van den Bergh, F. & Engelbrecht, A. P. | title=Cooperative Learning in Neural Networks using Particle Swarm Optimizers | publisher=CIRG 2000}}
*{{cite journal | author=Wilkes, A. L. & Wade, N. J. | year=1997 | title=Bain on Neural Networks | journal=Brain and Cognition | volume=33 | pages=295–305 | doi=10.1006/brcg.1997.0869}}
*{{cite book | author=Wasserman, P. D. | title=Neural Computing: Theory and Practice | publisher=Van Nostrand Reinhold | year=1989}}
*{{cite book | author=Spooner, Jeffrey T., Maggiore, Manfredi, Ordóñez, Raúl & Passino, Kevin M. | title=Stable Adaptive Control and Estimation for Nonlinear Systems: Neural and Fuzzy Approximator Techniques | publisher=John Wiley and Sons | year=2002}}
*http://www.cs.stir.ac.uk/courses/31YF/Notes/Notes_PL.html
*http://www.shef.ac.uk/psychology/gurney/notes/l1/section3_3.html
*{{cite book | author=Dayan, Peter & Abbott, L. F. | title=Theoretical Neuroscience | publisher=MIT Press}}
*{{cite book | author=Gerstner, Wulfram & Kistler, Werner | title=Spiking Neuron Models: Single Neurons, Populations, Plasticity | publisher=Cambridge University Press}}
{{refend}}

==External links==
*[http://www.msm.cam.ac.uk/phase-trans/abstracts/neural.review.html Review of Neural Networks in Materials Science]
*[http://www.neuralnets.eu Neural Network and Artificial Intelligence] - a vortal of artificial intelligence
*[http://www.e-nns.org European Neural Network Society (ENNS)]
*[http://www.inns.org International Neural Network Society (INNS)]
*[http://www.ieee-cis.org IEEE Computational Intelligence Society (IEEE CIS)]
*[http://www.gc.ssr.upm.es/inves/neural/ann1/anntutorial.html Artificial Neural Networks Tutorial in three languages (Univ. Politécnica de Madrid)]
*[http://www.makhfi.com/tutorial/introduction.htm Introduction to Neural Networks and Knowledge Modeling]
*[http://www.tandf.co.uk/journals/titles/0954898X.asp Network: Computation in Neural Systems]
*[http://www.willamette.edu/~gorr/classes/cs449/intro.html Introduction to Artificial Neural Networks]
*[http://www.hedengren.net/research/isat.htm In Situ Adaptive Tabulation] - a neural network alternative
*[http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.html Another introduction to ANN]
*[http://www.avaye.com/files/articles/nnintro/nn_intro.pdf An introduction to Neural Networks] (PDF)
*[http://www.obitko.com/tutorials/neural-network-prediction/ Prediction with neural networks] - includes a Java applet for online experimentation with prediction of a function
*[http://pl.youtube.com/watch?v=AyzOUbkUf3M Next Generation of Neural Networks] - Google Tech Talks
*[http://pages.sbcglobal.net/louis.savain/AI/perceptual_network.htm Perceptual Learning] - an artificial perceptual neural network used for machine learning to play [[chess]]
*[http://www.softcomputing.es/en/home.php European Centre for Soft Computing]

[[Category:Computational neuroscience]]
[[Category:Data Mining]]
[[Category:Neural networks]]
[[Category:Network architecture]]
[[Category:Networks]]
[[Category:Information, knowledge, and uncertainty]]

[[ar:الشبكة العصبيّة]]
[[bg:Невронна мрежа]]
[[de:Neuronales Netz]]
[[es:Red neuronal artificial]]
[[fr:Réseau de neurones]]
[[ko:신경망]]
[[he:רשת נוירונים]]
[[hr:Neuronska mreža]]
[[it:Rete neurale]]
[[hu:Neurális hálózat]]
[[nl:Neuraal netwerk]]
[[ja:ニューラルネットワーク]]
[[pl:Sieć neuronowa]]
[[pt:Rede neural]]
[[ro:Reţele neuronale]]
[[ru:Нейронная сеть]]
[[sk:Neurónová sieť]]
[[sl:Nevronska mreža]]
[[fi:Neuroverkot]]
[[sv:Neurala nätverk]]
[[vi:Mạng nơ-ron]]
[[zh:神经网络]]