Calculus
Note: Superscripts inside parentheses are used for indexing (e.g., layer $L$), not as powers.
Cost function, where $a^{(L)}$ is the activation of some layer $L$, less the expected output $y$, squared:

$$C_0 = \left(a^{(L)} - y\right)^2$$
Activation function for a given neuron, where $w^{(L)}$ is the weight of the neuron in the present layer, $a^{(L-1)}$ is the result of the previous layer's neuron activation, and $b^{(L)}$ is the bias of the current layer's neuron:

$$a^{(L)} = \sigma\!\left(w^{(L)} a^{(L-1)} + b^{(L)}\right)$$
It is often easier to split the above expression in two by naming the weighted sum $z^{(L)}$, where $\sigma$ is some non-linear function (like sigmoid or ReLU):

$$z^{(L)} = w^{(L)} a^{(L-1)} + b^{(L)}, \qquad a^{(L)} = \sigma\!\left(z^{(L)}\right)$$
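A minimal Python sketch of this forward pass, assuming a chain of single-neuron layers and a sigmoid non-linearity (the function names and numbers here are illustrative, not from the notes):

```python
import math

def sigmoid(z):
    # Non-linear activation: squashes any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def forward(w, a_prev, b):
    # z^(L) = w^(L) * a^(L-1) + b^(L)
    z = w * a_prev + b
    # a^(L) = sigma(z^(L))
    return sigmoid(z), z

# Example: one neuron feeding into the next
a, z = forward(w=0.6, a_prev=0.9, b=-0.3)
print(a, z)  # activation and weighted input of the current layer
```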
The following equivalence (an application of the chain rule) demonstrates how the weight $w^{(L)}$ influences the cost of a given neuron:

$$\frac{\partial C_0}{\partial w^{(L)}} = \frac{\partial z^{(L)}}{\partial w^{(L)}} \frac{\partial a^{(L)}}{\partial z^{(L)}} \frac{\partial C_0}{\partial a^{(L)}} = a^{(L-1)} \, \sigma'\!\left(z^{(L)}\right) \, 2\left(a^{(L)} - y\right)$$
To find the sensitivity of the full cost to the weight, average the per-example derivatives: take the product of the sum of all partial derivatives and $\frac{1}{n}$, where $n$ is the number of training examples:

$$\frac{\partial C}{\partial w^{(L)}} = \frac{1}{n} \sum_{k=0}^{n-1} \frac{\partial C_k}{\partial w^{(L)}}$$
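As a quick illustration of the averaging step, assuming the per-example gradients $\frac{\partial C_k}{\partial w^{(L)}}$ have already been computed (the `grads` values below are hypothetical):

```python
# Hypothetical per-example gradients dC_k/dw^(L) for n = 4 training examples
grads = [0.12, -0.05, 0.31, 0.02]

# dC/dw^(L) = (1/n) * sum_k dC_k/dw^(L)
dc_dw = sum(grads) / len(grads)
print(dc_dw)
```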
We can apply the same formula to find the sensitivity of the cost to the bias $b^{(L)}$ (note that $\frac{\partial z^{(L)}}{\partial b^{(L)}} = 1$):

$$\frac{\partial C_0}{\partial b^{(L)}} = \frac{\partial z^{(L)}}{\partial b^{(L)}} \frac{\partial a^{(L)}}{\partial z^{(L)}} \frac{\partial C_0}{\partial a^{(L)}} = \sigma'\!\left(z^{(L)}\right) 2\left(a^{(L)} - y\right)$$
And for the activation of the previous layer, $a^{(L-1)}$, which is how the error propagates backwards through the network:

$$\frac{\partial C_0}{\partial a^{(L-1)}} = \frac{\partial z^{(L)}}{\partial a^{(L-1)}} \frac{\partial a^{(L)}}{\partial z^{(L)}} \frac{\partial C_0}{\partial a^{(L)}} = w^{(L)} \, \sigma'\!\left(z^{(L)}\right) 2\left(a^{(L)} - y\right)$$
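Putting the three chain-rule expressions together, here is a sketch (single neuron per layer and sigmoid activation assumed; all names and numbers are illustrative) that computes each sensitivity and sanity-checks the weight gradient against a finite-difference estimate:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1.0 - s)

def cost_and_grads(w, a_prev, b, y):
    z = w * a_prev + b                # z^(L)
    a = sigmoid(z)                    # a^(L)
    c = (a - y) ** 2                  # C_0
    dc_da = 2.0 * (a - y)             # dC0/da^(L)
    da_dz = sigmoid_prime(z)          # da^(L)/dz^(L)
    dc_dw = a_prev * da_dz * dc_da    # dC0/dw^(L)
    dc_db = 1.0 * da_dz * dc_da       # dC0/db^(L), since dz/db = 1
    dc_da_prev = w * da_dz * dc_da    # dC0/da^(L-1)
    return c, dc_dw, dc_db, dc_da_prev

w, a_prev, b, y = 0.6, 0.9, -0.3, 1.0
c, dc_dw, dc_db, dc_da_prev = cost_and_grads(w, a_prev, b, y)

# Finite-difference check on dC0/dw: nudge w slightly and compare slopes
eps = 1e-6
c_plus, *_ = cost_and_grads(w + eps, a_prev, b, y)
print(dc_dw, (c_plus - c) / eps)  # the two values should nearly match
```

The finite-difference check is a cheap way to confirm the analytic gradient: the two printed values should agree to several decimal places.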
Data Sources
- https://physionet.org/about/database/
- https://archive.ics.uci.edu/dataset/336/chronic+kidney+disease
- https://futurebloodtesting.org/open-datasets/
- https://github.com/irinagain/Awesome-CGM?tab=readme-ov-file
- https://catalog.data.gov/dataset?q=&sort=views_recent+desc
- https://libguides.brown.edu/c.php?g=545426&p=3741199
Resources
- https://mathoverflow.net/questions/25983/intuitive-crutches-for-higher-dimensional-thinking
- http://neuralnetworksanddeeplearning.com/chap1.html
- https://towardsdatascience.com/introduction-to-neural-networks-advantages-and-applications-96851bd1a207
- https://developers.google.com/machine-learning/crash-course
- https://rickwierenga.com/blog/ml-fundamentals
- https://www.kaggle.com/code/onurderya/time-series-forecasting-with-ann-lstm
- https://youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi&si=y-O2YSLIXRSt6Emj