Neural Network Basic Terminology

Ashish Pandey
1 min read · Aug 7, 2021

𝑳𝒆𝒕'𝒔 𝒖𝒏𝒅𝒆𝒓𝒔𝒕𝒂𝒏𝒅 𝒕𝒉𝒆 π’Žπ’†π’‚π’π’Šπ’π’ˆ 𝒐𝒇 π’”π’π’Žπ’† π’ƒπ’‚π’”π’Šπ’„ 𝑡𝒆𝒖𝒓𝒂𝒍 π‘΅π’†π’•π’˜π’π’“π’Œ π’•π’†π’“π’Žπ’” —

π€πœπ­π’π―πšπ­π’π¨π§π¬ β€” Numbers that are calculated(both by Linear and non-linear layers).

𝐏𝐚𝐫𝐚𝐦𝐞𝐭𝐞𝐫𝐬 — Numbers that are randomly initialized and then optimized (that is, the numbers that define the model).
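A minimal sketch of the difference, using a hypothetical two-input linear "layer" in plain Python: the parameters are the stored weights and bias, while the activations are the numbers computed from them for a given input.

```python
import random

random.seed(0)

# Parameters: randomly initialized numbers that define the model.
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)

def linear(x):
    # Activations: numbers calculated by applying the layer to an input.
    return x[0] * weights[0] + x[1] * weights[1] + bias

activation = linear([1.0, 2.0])
```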

π‘πžπ‹π” β€” Function that returns 0 for negative numbers and doesn’t change positive numbers.

𝐌𝐒𝐧𝐒-π›πšπ­πœπ‘ β€” A small group of inputs and labels gathered together in two arrays. A gradient descent step is updated on this batch (rather than a whole epoch).

π…π¨π«π°πšπ«π 𝐩𝐚𝐬𝐬 β€” Applying the model to some input and computing the predictions.

𝐋𝐨𝐬𝐬 — A value that represents how well (or badly) our model is doing.
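The last two terms fit together in a small sketch: run the forward pass to get predictions, then reduce them to one loss number. The one-weight model and mean-squared-error loss here are illustrative choices, not the only options.

```python
def model(x, w):
    # Forward pass: apply the model to the inputs to compute predictions.
    return [w * xi for xi in x]

def mse_loss(preds, targets):
    # Loss: a single number measuring how badly the predictions miss.
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]
preds = model(xs, w=1.0)    # forward pass with a deliberately bad weight
loss = mse_loss(preds, ys)  # (1 + 4 + 9) / 3
```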

π†π«πšππ’πžπ§π­ β€” The derivative of the loss with respect to some parameter of the model.

𝐁𝐚𝐜𝐀𝐰𝐚𝐫𝐝 𝐩𝐚𝐬𝐬 — Computing the gradients of the loss with respect to all model parameters.

π†π«πšππ’πžπ§π­-𝐝𝐞𝐬𝐜𝐞𝐧𝐭- Taking a step in the direction opposite to the gradients to make the model parameters a little bit better.

π‹πžπšπ«π§π’π§π  π‘πšπ­πž β€” The size of the step we take when applying SGD to update the parameters of the model.
