1. Describe the Composition of ANNs?
ANNs are composed of multiple nodes, which imitate the biological neurons of the human brain. The nodes are connected by links and interact with each other. Each node can take input data and perform simple operations on it; the result of these operations is passed on to other nodes. The output at each node is called its activation or node value.
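The node behaviour described above (weighted links in, a simple operation, an activation value out) can be sketched in a few lines of Python. This is a minimal illustration; the function name, weights, and bias are our own assumptions, not from any particular library:

```python
def node_activation(inputs, weights, bias=0.0):
    """Compute a node's value: the weighted sum of its inputs plus a bias.

    Each input arrives over a link with an associated weight; the node's
    simple operation here is just sum-and-add. The returned number is the
    node's "activation" or node value passed on to other nodes.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return total

# A node receiving three inputs over three weighted links:
value = node_activation([1.0, 0.5, -1.0], [0.2, 0.4, 0.1])
```

Real networks usually pass this sum through a nonlinear transfer function as well; later questions in this set (e.g. on the Heaviside function) cover that step.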
2. Explain Basic Structure of ANNs?
The idea of ANNs is based on the belief that the working of the human brain can be imitated, by making the right connections, using silicon and wires in place of living neurons and dendrites.
The human brain is composed of 100 billion nerve cells called neurons. Each is connected to thousands of other cells by axons. Stimuli from the external environment, or inputs from the sensory organs, are accepted by dendrites. These inputs create electric impulses, which travel quickly through the neural network. A neuron can then either send the message on to other neurons to handle the issue or not pass it forward.
a) Linear Functions
Explanation:
Neural networks are complex linear functions with many parameters.
b) It is powerful and easy neural network
a) (ii) and (iii) are true
Explanation:
Pattern recognition is what single-layer neural networks are best at, but they do not have the ability to find the parity of a picture or to determine whether two shapes are connected or not.
d) All of the mentioned
Explanation:
All mentioned are the characteristics of neural network.
a) All of the mentioned are true
Explanation:
Neural networks have higher computational rates than conventional computers because much of the operation is done in parallel. That is not the case when the neural network is simulated on a computer. The idea behind neural nets is based on the way the human brain works. Neural nets cannot be programmed; they can only learn by examples.
d) All of the mentioned
Explanation:
Neural networks learn by example. They are more fault tolerant because they are always able to respond and small changes in input do not normally cause a change in output. Because of their parallel architecture, high computational rates are achieved.
c) (i) and (ii) are true
Explanation:
The training time depends on the size of the network: the greater the number of neurons, the greater the number of possible 'states', and the longer training takes. Neural networks can be simulated on a conventional computer, but the main advantage of neural networks - parallel execution - is then lost. Artificial neurons are not identical in operation to biological ones.
d) All of the mentioned
a) 238
Explanation:
The output is found by multiplying the weights by their respective inputs, summing the results, and applying the transfer function (here a simple gain of 2). Therefore:
Output = 2 * (1*4 + 2*10 + 3*5 + 4*20) = 238.
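The calculation above can be reproduced directly in Python. The inputs, weights, and gain of 2 are taken from the worked example; the function name is ours:

```python
def neuron_output(inputs, weights, gain=2):
    """Weighted sum of inputs, scaled by the transfer function.

    Here the transfer function is assumed to be a simple gain of 2,
    matching the worked example: 2 * (1*4 + 2*10 + 3*5 + 4*20).
    """
    return gain * sum(i * w for i, w in zip(inputs, weights))

out = neuron_output([1, 2, 3, 4], [4, 10, 5, 20])
print(out)  # 238
```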
b) a neural network that contains feedback
Explanation:
An auto-associative network is equivalent to a neural network that contains feedback. The number of feedback paths (loops) does not have to be one.
a) a single layer feed-forward neural network with pre-processing
Explanation:
The perceptron is a single layer feed-forward neural network. It is not an auto-associative network because it has no feedback and is not a multiple layer neural network because the pre-processing stage is not made of neurons.
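A single-layer feed-forward unit of the kind described here can be sketched as a weighted sum followed by a hard threshold. The weights and threshold below are assumed values chosen so the unit acts as an AND gate; they are not from the source:

```python
def perceptron(inputs, weights, threshold=0.5):
    """Single-layer feed-forward unit: weighted sum, then a hard threshold."""
    s = sum(x * w for x, w in zip(inputs, weights))
    return 1 if s >= threshold else 0

# With weights [0.3, 0.3] and threshold 0.5 (our choice), the unit fires
# only when both inputs are 1, since only then does the sum reach 0.6.
and_outputs = [perceptron([a, b], [0.3, 0.3]) for a in (0, 1) for b in (0, 1)]
```

Note there is no feedback path here (so it is not auto-associative), and the thresholding is a fixed operation rather than a trained layer of neurons, consistent with the explanation above.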
c) 000 or 010 or 110 or 100
Explanation:
The truth table before generalization is:
Inputs  Output
 000      $
 001      $
 010      $
 011      $
 100      $
 101      $
 110      0
 111      1
where $ represents the don't-know cases, for which the output is random.
After generalization, the truth table becomes:
Inputs  Output
 000      0
 001      1
 010      0
 011      1
 100      0
 101      1
 110      0
 111      1
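One way to read the generalized table is that the output simply tracks the last input bit. A short Python check of that pattern (our observation from the table above, not stated explicitly in the source):

```python
# The generalized truth table, keyed by the three input bits:
table = {
    "000": 0, "001": 1, "010": 0, "011": 1,
    "100": 0, "101": 1, "110": 0, "111": 1,
}

# After generalization, the output equals the last input bit for every row:
follows_last_bit = all(out == int(bits[-1]) for bits, out in table.items())
```

The two known training rows (110 -> 0 and 111 -> 1) are consistent with this rule, which is why the network can generalize the don't-know cases this way.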
b) Heaviside function
Explanation:
Also known as the step function (so answer 1 is also correct). It is a hard-thresholding function: either on or off, with no in-between.
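The hard-thresholding behaviour can be written in one line of Python. Note that the value returned exactly at zero is a matter of convention; here we assume the unit fires at zero:

```python
def heaviside(x):
    """Heaviside (step) function: hard threshold with no in-between values.

    Convention assumed here: returns 1 at x == 0. Some definitions
    instead return 0 or 0.5 at zero.
    """
    return 1 if x >= 0 else 0
```

This is the "on or off" transfer function referred to above, in contrast to smooth functions such as the sigmoid.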
c) Recurrent neural network
Explanation:
RNN (Recurrent neural network) topology involves backward links from output to the input and hidden layers.
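The backward links mean the network's state at one step feeds into the next. A minimal single-unit sketch of that recurrence (all weights and inputs are assumed illustrative values, not from the source):

```python
import math

def rnn_step(x, h_prev, w_in, w_rec):
    """One recurrent update: the new hidden state depends on the current
    input AND the previous hidden state, fed back through a recurrent
    weight. tanh keeps the state bounded in (-1, 1)."""
    return math.tanh(w_in * x + w_rec * h_prev)

# The hidden state h carries history forward across the input sequence:
h = 0.0
for x in [1.0, 0.5, -1.0]:
    h = rnn_step(x, h, w_in=0.8, w_rec=0.5)
```

Without the `w_rec * h_prev` feedback term this would collapse to an ordinary feed-forward unit applied independently at each step.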
c) True - perceptrons can do this but are unable to learn to do it - they have to be explicitly hand-coded
a) True
Explanation:
Yes, the perceptron works like that.
b) Because they are the only class of problem that Perceptron can solve successfully
Explanation:
Linearly separable problems are of interest to neural network researchers because they are the only class of problem that the perceptron can solve successfully.
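To make this concrete, the classic perceptron learning rule converges on any linearly separable problem; AND is one such problem. A sketch, with learning rate, epoch count, and initial weights as our own assumptions:

```python
def predict(x, w, b):
    """Perceptron decision: weighted sum plus bias, hard-thresholded at 0."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=20):
    """Perceptron learning rule: nudge weights by (target - prediction)."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(x, w, b)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# AND is linearly separable, so the rule finds a separating line:
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
```

Run the same loop on XOR and it never converges, which is exactly the limitation this question is pointing at.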
a) It can explain result
Explanation:
The artificial neural network (ANN) cannot explain its results.
c) It is the transmission of error back through the network to allow weights to be adjusted so that the network can learn.
Explanation:
Back propagation is the transmission of error back through the network to allow weights to be adjusted so that the network can learn.
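The "transmission of error back through the network" can be shown on a toy two-weight network: one input, one hidden neuron, one output. Everything here (starting weights, learning rate, iteration count, use of the sigmoid) is an assumed illustrative setup, not the source's method:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy network: input -> hidden (weight w1) -> output (weight w2),
# trained on a single example. Backpropagation carries the output
# error backward through w2 so that w1 can also be adjusted.
x, target = 1.0, 0.0
w1, w2, lr = 0.5, 0.5, 1.0   # assumed starting weights and learning rate

for _ in range(200):
    h = sigmoid(w1 * x)                       # forward pass
    y = sigmoid(w2 * h)
    delta_out = (y - target) * y * (1 - y)    # error signal at the output
    delta_hid = delta_out * w2 * h * (1 - h)  # error propagated backward
    w2 -= lr * delta_out * h                  # weight adjustments
    w1 -= lr * delta_hid * x

final_error = abs(sigmoid(w2 * sigmoid(w1 * x)) - target)
```

The key line is the one computing `delta_hid`: the hidden unit never sees the target directly, yet it still receives an error signal, routed back through `w2`. That routing is what "back propagation" names.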
d) Because it is the simplest linearly inseparable problem that exists.
23. Explain Artificial Neural Networks (ANNs)?
A computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.
A neural network can be defined as a model of reasoning based on the human brain. The human brain incorporates nearly 10 billion neurons and 60 trillion connections, synapses, between them. By using multiple neurons simultaneously, the brain can perform its functions much faster than the fastest computers. Although a single neuron has a very simple structure, an army of such elements constitutes tremendous processing power. The network which represents the connections among several neurons is called a neural network.
26. What is Artificial Intelligence Neural Networks?
In trying to produce intelligent behavior, however, all that is really being done is work with artificial neural networks, where each cell is a very simple processor and the goal is to make the cells work together to solve some problem. That is all that gets covered in this book. Many people are skeptical that artificial neural networks can produce human levels of performance, because they are so much simpler than biological neural networks.