deep feedforward networks
E25122
Deep feedforward networks are a class of neural network architectures in which information flows in one direction through multiple layers, with no recurrent connections, enabling the network to learn complex input–output mappings.
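In the standard textbook formulation (a common identity, not drawn from this entry's statements), a depth-$L$ network computes its output by repeated affine maps followed by nonlinearities:

$$
h^{(0)} = x, \qquad h^{(l)} = \sigma\!\left(W^{(l)} h^{(l-1)} + b^{(l)}\right), \quad l = 1, \dots, L,
$$

where $\sigma$ is a nonlinear activation function and $h^{(L)}$ is the output (e.g., passed through a softmax for classification).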
Statements (49)
| Predicate | Object |
|---|---|
| instanceOf | artificial neural network architecture; deep learning model |
| canUseActivationFunction | ReLU; leaky ReLU; sigmoid; softmax in output layer for classification; tanh |
| canUseLossFunction | cross-entropy loss; mean squared error |
| canUseOptimizer | Adam; RMSProp; SGD |
| differsFrom | convolutional neural networks; recurrent neural networks |
| hasAlternativeName | deep MLPs; deep feedforward neural networks; deep multilayer perceptrons |
| hasComponent | input layer; one or more hidden layers; output layer |
| hasKeyProperty | composed of layers of units with learnable weights; information flows in one direction; learn complex input–output mappings; multiple hidden layers; no recurrent connections |
| hasProperty | depth enables hierarchical feature learning; differentiable with respect to parameters; feedforward computation from inputs to outputs; parameters organized in layers; universal function approximator under mild conditions |
| introducedInContextOf | deep learning |
| isSubclassOf | feedforward neural networks |
| mayUse | batch normalization; residual connections |
| regularizedBy | dropout; early stopping; weight decay |
| requires | labeled training data for supervised tasks |
| trainedBy | supervised learning |
| trainedWith | backpropagation; gradient descent; mini-batch gradient descent; stochastic gradient descent |
| usedFor | classification; function approximation; pattern recognition; regression; representation learning |
| uses | nonlinear activation functions |
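The statements above can be tied together in a minimal sketch. The following uses PyTorch as an illustrative framework (the entry does not prescribe one): an input layer, two hidden layers with ReLU activations, dropout and weight-decay regularization, cross-entropy loss (which applies softmax internally), and one mini-batch gradient-descent step via backpropagation. All layer sizes and hyperparameters are arbitrary placeholders, not facts from this entry.

```python
import torch
from torch import nn

# A deep MLP: input layer -> two hidden layers (ReLU) -> output layer.
# Sizes are illustrative placeholders.
model = nn.Sequential(
    nn.Linear(784, 256),   # input layer -> first hidden layer
    nn.ReLU(),             # nonlinear activation function
    nn.Dropout(p=0.5),     # dropout regularization
    nn.Linear(256, 128),   # second hidden layer
    nn.ReLU(),
    nn.Linear(128, 10),    # output layer (logits; softmax applied inside the loss)
)

loss_fn = nn.CrossEntropyLoss()  # cross-entropy loss for classification
optimizer = torch.optim.SGD(     # stochastic gradient descent
    model.parameters(), lr=0.1, weight_decay=1e-4  # weight-decay regularization
)

# One mini-batch gradient-descent step on synthetic labeled data
# (supervised learning requires labeled examples).
x = torch.randn(32, 784)          # mini-batch of 32 inputs
y = torch.randint(0, 10, (32,))   # class labels
logits = model(x)                 # feedforward computation, inputs -> outputs
loss = loss_fn(logits, y)
optimizer.zero_grad()
loss.backward()                   # backpropagation computes the gradients
optimizer.step()                  # gradient-descent parameter update
```

Replacing `torch.optim.SGD` with `torch.optim.Adam` or `torch.optim.RMSprop` covers the other optimizers listed above.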
Referenced by (1)
| Subject (surface form when different) | Predicate |
|---|---|
| Deep Learning (book) | subject |