Neural Networks
The structure of the neural network
A neuron can be a binary logistic regression unit
In formula form: $h_{w,b}(x) = f(w^\top x + b)$, where $f(z) = \frac{1}{1 + e^{-z}}$ is the logistic (sigmoid) function.
b: We can have an "always on" feature, which gives a class prior, or separate it out as a bias term (b is what we usually think of as the bias).
A neural network = running several logistic regressions at the same time
A single-layer neural network
If we feed a vector of inputs through a bunch of logistic regression functions, we get a vector of outputs, but we do not have to decide ahead of time what those logistic regressions are trying to predict.
A multi-layer neural network
We can feed the intermediate outputs into another logistic regression function; the loss function will direct what the intermediate variables should be in order to better predict the targets of the next layer.
Matrix notation for a layer
For example:
$a_1 = f(W_{11}x_1 + W_{12}x_2 + W_{13}x_3 + b_1)$, $a_2 = f(W_{21}x_1 + W_{22}x_2 + W_{23}x_3 + b_2)$, etc.
We then have: $z = Wx + b$
In summary: $a = f(Wx + b)$
How f operates: it is applied element-wise, $f([z_1, z_2, z_3]) = [f(z_1), f(z_2), f(z_3)]$
Why a non-linear f is necessary
Its importance:
Without non-linear activation functions, a deep neural network cannot compute anything more complex than a linear transformation.
Multiple stacked linear layers are equivalent to a single linear transformation: $W_1W_2x=Wx$
With non-linear activation functions in between, the network can fit much more complex functions.
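As a quick illustration, here is a minimal NumPy sketch (with made-up small dimensions) showing that stacking two linear layers with no activation collapses to a single linear map, while adding a non-linearity in between does not:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 3))   # first "layer"
W2 = rng.standard_normal((2, 4))   # second "layer"
x = rng.standard_normal(3)

# Two stacked linear layers equal one linear layer with W = W2 @ W1
two_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
print(np.allclose(two_linear, collapsed))   # True: no extra expressive power

# With a non-linearity in between, the composition is no longer a single linear map
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x)
```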
Named Entity Recognition (NER)
The task: find and classify names in text, for example:
Possible purposes:
Tracking mentions of particular entities in documents
For question answering, answers are usually named entities
A lot of the information we want is really associations between named entities
The same techniques can be extended to other slot-filling classification tasks
Why might NER be hard?
Entity boundaries are hard to determine
It is hard to know whether something is an entity at all
It is hard to know the class of an unknown or novel entity
Entity class is ambiguous and depends on context
Binary word window classification: classifying a word using a small window of surrounding context
The problem:
In general, we rarely classify a single word in isolation.
Ambiguity arises in context; the same word form can have multiple meanings.
Example 1: auto-antonyms
"To sanction" can mean "to permit" or "to punish"
"To seed" can mean "to place seeds" or "to remove seeds"
Example 2: resolving linking of ambiguous named entities
Paris → Paris, France vs. Paris Hilton vs. Paris, Texas
Hathaway → Berkshire Hathaway vs. Anne Hathaway
Window classification: Softmax
Idea: classify a word in its context window of neighboring words.
A simple way to classify a word in context might be to average the word vectors in a window and classify the average vector; the drawback is that averaging loses position information.
Another approach: train a softmax classifier to classify a center word by taking the concatenation of the word vectors surrounding it in a window.
For example: classify "Paris" in the context of the sentence "museums in Paris are amazing" with window length 2.
The resulting vector $x_{window} = x \in \mathbb{R}^{5d}$ is a column vector, which is then fed through a softmax classifier.
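A small sketch of this concatenation-plus-softmax setup (the embeddings, dimensions, and class set here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                            # word-vector dimension (assumed)
words = ["museums", "in", "Paris", "are", "amazing"]
emb = {w: rng.standard_normal(d) for w in words} # toy embeddings

# Window of length 2 around the center word "Paris": concatenate 2*2 + 1 = 5 vectors
x_window = np.concatenate([emb[w] for w in words])   # shape (5d,) = (20,)

num_classes = 2                                  # e.g. LOCATION vs. not
W_softmax = rng.standard_normal((num_classes, 5 * d))
b_softmax = np.zeros(num_classes)

logits = W_softmax @ x_window + b_softmax
probs = np.exp(logits - logits.max())
probs /= probs.sum()                             # softmax over the classes
```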
Binary classification with unnormalized scores: assign the classification result an unnormalized score.
In the previous example:
Suppose we want to decide whether the center word is a Location.
As with word2vec, we go over all positions in the corpus, but this time only some of them should get a high score:
the positions that have an actual NER Location in their center are "true" positions and get a high score.
Neural Network Feed-forward Computation
Use the neural network's hidden activations a to produce an unnormalized score.
We compute a window's score with a 3-layer neural network: $s = u^\top a = u^\top f(Wx + b)$
s = score("museums in Paris are amazing")
Main intuition for extra layer
The middle layer learns non-linear interactions between the input word vectors. Example: only if "museums" is the first vector should it matter that "in" is in the second position.
The max-margin loss
Idea for the training objective: make the true window's score larger and the corrupt window's score lower (until they are good enough).
s = score(museums in Paris are amazing), $s_c$ = score(Not all museums in Paris)
Minimize $J = \max(0,\, 1 - s + s_c)$
This objective is not differentiable everywhere, but it is continuous, so we can optimize it with SGD.
Each window with an NER Location at its center should have a score +1 higher than any window without a Location at its center.
For the full objective function: sample several corrupt windows per true one and sum over all training windows (a method similar to negative sampling).
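A hedged sketch of this loss (the scoring function and sampling loop below are illustrative, not the course's exact implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def score(x, W, b, u):
    """Window score s = u^T f(Wx + b), with f = sigmoid."""
    return u @ sigmoid(W @ x + b)

def max_margin_loss(true_windows, corrupt_windows, W, b, u, margin=1.0):
    """Sum of max(0, margin - s + s_c) over true windows and their sampled corrupt windows."""
    total = 0.0
    for x_true in true_windows:
        s = score(x_true, W, b, u)
        for x_corrupt in corrupt_windows:        # several sampled corrupt windows per true one
            s_c = score(x_corrupt, W, b, u)
            total += max(0.0, margin - s + s_c)
    return total
```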
Matrix calculus (not derived in detail here)
Example Jacobian: elementwise activation function. For $h = f(z)$ applied element-wise, the Jacobian is diagonal: $\frac{\partial h}{\partial z} = \operatorname{diag}(f'(z))$.
An example of multivariate differentiation:
Applying this to our scoring function $s = u^\top h$, with $h = f(z)$ and $z = Wx + b$:
using the chain rule on the expression above we get $\frac{\partial s}{\partial b} = \frac{\partial s}{\partial h}\,\frac{\partial h}{\partial z}\,\frac{\partial z}{\partial b} = u^\top \operatorname{diag}(f'(z))\,I = u^\top \circ f'(z)$, the element-wise product that reappears below as the error signal $\delta$.
Notes: Neural Networks, Backpropagation
Neural Networks: Foundations
A neuron
A neuron is a generic computational unit that takes n inputs and produces a single output. What differentiates the outputs of different neurons is their parameters (also referred to as their weights). A popular choice is the sigmoid ("binary logistic regression") unit, whose output is $a = \frac{1}{1 + \exp(-(w^\top x + b))}$; visualized as a graph, the neuron combines the inputs x with the weights w and bias b and squashes the result through the sigmoid.
Neurons like this are one of the ways we can introduce non-linearities that accumulate through the network.
A single layer of neurons
We extend the idea of a single neuron to multiple neurons by considering the case where the input x is fed into several such neurons.
If we refer to the different neurons' weights as $w^{(1)}, \dots, w^{(m)}$ and the biases as $b_1, \dots, b_m$, the respective activations are $a_i = \sigma(w^{(i)\top} x + b_i)$.
Stacking the weight vectors as the rows of a matrix W and the biases into a vector b, this simplifies to $z = Wx + b$, and the activations become $a = \sigma(z) = \sigma(Wx + b)$, with $\sigma$ applied element-wise.
Feed-forward computation
First consider an NER problem in NLP as an example: in "Museums in Paris are amazing", we want to classify whether or not the center word "Paris" is a named entity. In such cases, we will likely not just want to capture the word vectors in the window but also some other interactions between the words, for the purpose of classification. For instance, maybe it should matter that "Museums" is the first word only if "in" is the second word. Such non-linear, order-dependent decisions usually cannot be captured by feeding the inputs directly into a softmax; instead we need an intermediate layer whose output we score. We therefore use another matrix $U$ to generate an unnormalized score for the classification task from the activations: $s = U^\top a = U^\top f(Wx + b)$, where f is the activation function.
Analysis of dimensions: if we represent each word using a 4-dimensional word vector and we use a 5-word window as input (as in the above example), then the input is $x \in \mathbb{R}^{20}$. If we use 8 sigmoid units in the hidden layer and generate 1 score output from the activations, then $W \in \mathbb{R}^{8 \times 20}$, $b \in \mathbb{R}^{8}$, $U \in \mathbb{R}^{8}$, and $s \in \mathbb{R}$.
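A minimal NumPy sketch of this forward pass with the dimensions above (random values stand in for trained parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_words, hidden = 4, 5, 8

x = rng.standard_normal(d * n_words)            # x in R^20: concatenated window vectors
W = rng.standard_normal((hidden, d * n_words))  # W in R^{8x20}
b = rng.standard_normal(hidden)                 # b in R^8
U = rng.standard_normal(hidden)                 # U in R^8

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

z = W @ x + b        # pre-activations, shape (8,)
a = sigmoid(z)       # hidden activations, shape (8,)
s = U @ a            # unnormalized score, a scalar
```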
Maximum Margin Objective Function
We use the maximum margin objective function, which ensures that the score computed for "true" labeled data points is higher than the score computed for "false" labeled data points.
Defining the notation: using the previous example, call the score computed for the "true" labeled window "Museums in Paris are amazing" $s$, and the score computed for the "false" labeled window "Not all museums in Paris" $s_c$ (subscripted c to signify that the window is "corrupt").
Our objective would then be to maximize $s - s_c$ or, equivalently, minimize $s_c - s$. However, we modify the objective so that error is only computed when $s_c > s$: all we care about is that the "true" data point scores higher than the "false" one, and anything beyond that does not matter. The error is therefore $s_c - s$ when $s_c > s$ and 0 otherwise, so the optimization objective is: minimize $J = \max(s_c - s,\, 0)$.
However, the above optimization objective is risky in the sense that it does not attempt to create a margin of safety. We would want the "true" labeled data point to score higher than the "false" labeled data point by some positive margin $\Delta$. In other words, we would want the error to be calculated if $s - s_c < \Delta$ and not just when $s - s_c < 0$. Thus, we modify the optimization objective to: minimize $J = \max(\Delta + s_c - s,\, 0)$.
We can scale this margin such that $\Delta = 1$ and let the other parameters in the optimization problem adapt to this without any change in performance (compare the analogous step in the SVM derivation). The objective becomes: minimize $J = \max(1 + s_c - s,\, 0)$.
Gradient updates
Backpropagation is a technique that uses the chain rule to compute the gradient of the loss with respect to any parameter in the model.
We use the following notation:
1. $x$ is an input to the neural network.
2. $s$ is the output of the neural network.
3. Each layer (including the input and output layers) has neurons which receive an input and produce an output. The j-th neuron of layer k receives the scalar input $z_j^{(k)}$ and produces the scalar activation output $a_j^{(k)}$ (the superscript is the layer, the subscript the neuron's position).
4. We will call the backpropagated error calculated at $z_j^{(k)}$ as $\delta_j^{(k)}$.
5. Layer 1 refers to the input layer and not the first hidden layer.
6. $W^{(k)}$ is the transfer matrix that maps the output from the k-th layer to the input of the (k+1)-th layer.
Starting backpropagation
Now suppose the loss $J = (1 + s_c - s)$ is positive and we want to update a parameter $W_{14}^{(1)}$. We see that $W_{14}^{(1)}$ only contributes to the computation of $z_1^{(2)}$ and hence $a_1^{(2)}$. This point is crucial to understanding backpropagation: backpropagated gradients are only affected by the values they contribute to. $a_1^{(2)}$ is subsequently multiplied by $W_1^{(2)}$ in the forward computation of the score. From the max-margin loss we can see that $\frac{\partial J}{\partial s} = -\frac{\partial J}{\partial s_c} = -1$.
For simplicity, we only analyze $\frac{\partial s}{\partial W_{ij}^{(1)}}$.
Here $a_j^{(1)}$ refers to the j-th input of the input layer. The gradient computation finally simplifies to $\frac{\partial s}{\partial W_{ij}^{(1)}} = \delta_i^{(2)} a_j^{(1)}$, where $\delta_i^{(2)}$ is essentially the error propagated backwards from the i-th neuron in layer 2, and $a_j^{(1)}$ is the input that gets multiplied by $W_{ij}$ and fed into that i-th neuron in layer 2.
Training with Backpropagation – Vectorized
For a given parameter $W_{ij}^{(k)}$, we know its error gradient is $\delta_i^{(k+1)} a_j^{(k)}$, where $W^{(k)}$ is the matrix that maps $a^{(k)}$ to $z^{(k+1)}$. We can therefore establish that the error gradient for the entire matrix $W^{(k)}$ is $\nabla_{W^{(k)}} = \delta^{(k+1)} a^{(k)\top}$.
Thus we can write the gradient for the whole matrix as the outer product of the error vector propagating backwards into the matrix and the forward activations it received; the error itself propagates backwards as $\delta^{(k)} = f'(z^{(k)}) \circ (W^{(k)\top} \delta^{(k+1)})$, where $\circ$ denotes element-wise multiplication.
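A hedged NumPy sketch of these vectorized formulas for the one-hidden-layer scorer used above (shapes follow the earlier dimension analysis; this is an illustration, not the notes' reference code):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def score_and_grads(x, W, b, U):
    """Forward pass s = U^T f(Wx + b) and gradients of s w.r.t. U, b, W."""
    z = W @ x + b
    a = sigmoid(z)
    s = U @ a

    # Error at z: delta = f'(z) o (error arriving from above), which here is just U
    delta = a * (1.0 - a) * U          # sigmoid'(z) = a * (1 - a)

    grad_U = a                          # ds/dU = a
    grad_b = delta                      # ds/db = delta
    grad_W = np.outer(delta, x)         # ds/dW: outer product of delta and the inputs
    return s, grad_W, grad_b, grad_U
```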
Neural Networks: Tips and Tricks
Gradient Check
Given a model with parameter vector θ and loss function J, the numerical gradient around $\theta_i$ is simply given by the centered difference formula: $f'(\theta_i) \approx \frac{J(\theta^{(i+)}) - J(\theta^{(i-)})}{2\varepsilon}$, where $\theta^{(i\pm)}$ is θ with its i-th element perturbed by $\pm\varepsilon$.
In other words, this is just a numerical estimate of the gradient.
Now, a natural question you might ask is, if this method is so precise, why do we not use it to compute all of our network gradients instead of applying back-propagation? The simple answer, as hinted earlier, is inefficiency – recall that every time we want to compute the gradient with respect to an element, we need to make two forward passes through the network, which will be computationally expensive. Furthermore, many large-scale neural networks can contain millions of parameters, and computing two passes per parameter is clearly not optimal. So while the centered-difference estimate is accurate, it is only a spot check that our gradients are correct; the efficient, practical way to compute them is backpropagation.
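A small sketch of such a centered-difference check (function and variable names here are made up for illustration; it assumes θ is a flat NumPy array):

```python
import numpy as np

def numerical_gradient(J, theta, eps=1e-5):
    """Centered-difference estimate of dJ/dtheta, one element at a time."""
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        old = theta[i]
        theta[i] = old + eps
        j_plus = J(theta)              # first forward pass
        theta[i] = old - eps
        j_minus = J(theta)             # second forward pass
        theta[i] = old                 # restore the parameter
        grad[i] = (j_plus - j_minus) / (2 * eps)
    return grad

# Usage: spot-check the analytic gradient from backpropagation
# assert np.allclose(numerical_gradient(J, theta), analytic_grad, atol=1e-6)
```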
Regularization
As with many machine learning models, neural networks are highly prone to overfitting, where a model is able to obtain near perfect performance on the training dataset, but loses the ability to generalize to unseen data. A common fix is L2 regularization: we simply add a regularization term to the loss J, and the modified loss becomes $J_R = J + \lambda \sum_{i=1}^{L} \lVert W^{(i)} \rVert_F$.
In the formula above, λ is a hyperparameter that controls the weight of the regularization term, and $\lVert W^{(i)} \rVert_F$ is the Frobenius norm of the i-th layer's weight matrix: the square root of the sum of the squares of all its elements, $\lVert W \rVert_F = \sqrt{\sum_{i}\sum_{j} W_{ij}^2}$.
The effect of the regularization term: what regularization is essentially doing is penalizing weights for being too large while optimizing over the original cost function, spreading the weight values out and preventing any single weight from growing too large.
Due to the quadratic nature of the Frobenius norm (which computes the sum of the squared elements of a matrix), L2 regularization effectively reduces the flexibility of the model and thereby reduces the overfitting phenomenon. Imposing such a constraint can also be interpreted as the prior Bayesian belief that the optimal weights are close to zero – how close depends on the value of λ.
Too high a value of λ causes most of the weights to be set too close to 0, and the model does not learn anything meaningful from the training data, often obtaining poor accuracy on training, validation, and testing sets. λ therefore has to be tuned appropriately.
Why is there no regularization term for the bias?
Regularization exists to prevent overfitting, and overfitting shows up as the model producing wildly different outputs for small changes in the input. That behavior is driven by overly large entries of W, not by b: the bias only shifts the output and is insensitive to how much the input changes, so it is usually left unregularized.
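A short sketch of adding such a penalty, using the squared Frobenius norm as is common in practice (the layer structure is assumed; note that biases are deliberately excluded):

```python
import numpy as np

def l2_penalty(weight_matrices, lam):
    """lambda times the summed squared Frobenius norms of the weight matrices (biases excluded)."""
    return lam * sum(np.sum(W ** 2) for W in weight_matrices)

# Regularized loss: original data loss plus the weight penalty
# J_R = data_loss + l2_penalty([W1, W2], lam=1e-4)
```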
Dropout: randomly dropping a subset of units during training
The idea is simple yet effective – during training, we will randomly "drop" with some probability (1−p) a subset of neurons during each forward/backward pass (or equivalently, we will keep each neuron alive with a probability p). Then, during testing, we will use the full network to compute our predictions. The result is that the network typically learns more meaningful information from the data, is less likely to overfit, and usually obtains higher performance overall on the task at hand. One intuitive reason why this technique is so effective is that dropout is essentially training exponentially many smaller networks at once and averaging over their predictions.
However, a key subtlety is that in order for dropout to work effectively, the expected output of a neuron during testing should be approximately the same as it was during training – else the magnitude of the outputs could be radically different, and the behavior of the network is no longer well-defined. Since each neuron is kept with probability p during training, its expected training-time output is p times its full output; we therefore scale each neuron's output by p at test time (or, equivalently, use "inverted dropout": divide the kept activations by p during training and leave test time untouched).
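A minimal sketch of the inverted-dropout variant mentioned above (the keep probability is illustrative):

```python
import numpy as np

def dropout_forward(a, p_keep=0.5, train=True):
    """Inverted dropout: drop units with prob (1 - p_keep) and rescale by 1/p_keep
    during training, so no rescaling is needed at test time."""
    if not train:
        return a                                   # use the full network at test time
    mask = np.random.rand(*a.shape) < p_keep       # keep each unit with probability p_keep
    return a * mask / p_keep
```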
Parameter initialization
A key step towards achieving superlative performance with a neural network is initializing the parameters in a reasonable way. A good starting strategy is to initialize the weights to small random numbers distributed around 0. A common choice is Xavier (Glorot) initialization, $W \sim \mathrm{Uniform}\left[-\sqrt{\tfrac{6}{n^{(l)}+n^{(l+1)}}},\ \sqrt{\tfrac{6}{n^{(l)}+n^{(l+1)}}}\right]$, where $n^{(l)}$ is the number of input units to W (fan-in) and $n^{(l+1)}$ is the number of output units from W (fan-out).
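A brief sketch of that initialization rule (assuming the same hidden-layer shape as in the earlier example):

```python
import numpy as np

def xavier_uniform(fan_in, fan_out):
    """Xavier/Glorot uniform initialization for a (fan_out, fan_in) weight matrix."""
    bound = np.sqrt(6.0 / (fan_in + fan_out))
    return np.random.uniform(-bound, bound, size=(fan_out, fan_in))

W1 = xavier_uniform(fan_in=20, fan_out=8)   # e.g. the hidden layer from the earlier example
b1 = np.zeros(8)                            # biases are commonly initialized to 0
```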
Extra: optimization algorithms (this part comes from Datawhale's "顏值擔當")
A family of gradient-based optimizers that adapt each update using the history of past gradients (see the sketch after this list):
SGD
Momentum
NAG
AdaGrad
AdaDelta
Adam
Nadam
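The section only names these optimizers; as a rough illustration (not their full definitions), here is a sketch of the plain SGD and momentum update rules with placeholder hyperparameters:

```python
import numpy as np

def sgd_step(theta, grad, lr=0.01):
    """Plain SGD: step against the gradient."""
    return theta - lr * grad

def momentum_step(theta, grad, velocity, lr=0.01, beta=0.9):
    """Momentum: accumulate an exponentially decaying moving average of past gradients."""
    velocity = beta * velocity - lr * grad
    return theta + velocity, velocity
```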
Original title: 【CS224N筆記】一文詳解神經網絡來龍去脈 (CS224N notes: a walkthrough of neural networks)
Source: the WeChat public account 深度學習自然語言處理 (zenRRan)