It is the first day of the National Day holiday, and the whole country is celebrating! Every tourist spot is bound to be packed, so why not stay home and read some papers instead? Submissions to ICLR 2019, one of the top machine learning conferences, have just closed, and this year's submissions are posted publicly under anonymous review. This article rounds up the papers currently drawing the liveliest discussion on overseas social networks and on Zhihu. Let's take a look!
First, let's look at the topic distribution of last year's ICLR 2018 submissions, shown in the figure below. Top keywords: reinforcement learning, GAN, RIP, etc.
The figure above shows the distribution for ICLR 2019 submissions. Top keywords: reinforcement learning, GAN, meta-learning, and so on. There are some noticeable shifts from last year.
Submissions:
https://openreview.net/group?id=ICLR.cc/2019/Conference
A more intuitive visualization of how the topics among ICLR 2019 submissions relate to one another can be found on Google Colaboratory. We selected "GAN", the third-ranked topic in the figure above, shown in red. As the chart shows, GAN intersects with several other topics, such as training, state, and graph.
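If you want to reproduce this kind of keyword statistic yourself, here is a minimal sketch. It is not the Colab notebook mentioned above: it assumes the openreview-py client, and the invitation ID and keyword list below are our own guesses, so adjust them as needed.

```python
# Sketch: count keyword frequencies over ICLR 2019 submissions.
# Assumes `pip install openreview-py`; the invitation ID is our guess
# at the blind-submission invitation and may need adjusting.
from collections import Counter

import openreview

client = openreview.Client(baseurl="https://api.openreview.net")
notes = openreview.tools.iterget_notes(
    client, invitation="ICLR.cc/2019/Conference/-/Blind_Submission"
)

keywords = ["reinforcement learning", "gan", "meta-learning", "graph", "attention"]
counts = Counter()
for note in notes:
    # Search both the title and the author-provided keyword list.
    text = (note.content.get("title", "") + " "
            + " ".join(note.content.get("keywords", []))).lower()
    for kw in keywords:
        if kw in text:
            counts[kw] += 1

for kw, n in counts.most_common():
    print(f"{kw}: {n}")
```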
Top 5 most hotly discussed papers
1. LARGE SCALE GAN TRAINING FOR HIGH FIDELITY NATURAL IMAGE SYNTHESIS
The strongest GAN image generator yet: its samples are hard to tell from real photos
Paper:
https://openreview.net/pdf?id=B1xsqj09Fm
More samples:
https://drive.google.com/drive/folders/1lWC6XEPD0LT5KUnPXeve_kWeY-FxH002
First up is BigGAN itself. Oriol Vinyals, who heads DeepMind's StarCraft project, said this paper delivers the best GAN-generated images to date, lifting the Inception Score by more than 100 points.
Abstract:
Despite recent progress in generative image modeling, generating high-resolution, diverse samples from complex datasets such as ImageNet remains a major challenge. To this end, the researchers train generative adversarial networks at the largest scale attempted so far and study the instabilities specific to training at that scale. They find that applying orthogonal regularization to the generator makes it amenable to a simple "truncation trick", which allows fine control over the trade-off between sample fidelity and variety by truncating the latent space. This modification lets the model reach a new state of the art in class-conditional image synthesis. When trained on ImageNet at 128x128 resolution, the proposed model, BigGAN, achieves an Inception Score (IS) of 166.3 and a Frechet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.
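The "truncation trick" mentioned in the abstract is easy to state in code: draw the latent vector from a truncated normal, resampling any coordinate that falls outside a threshold; smaller thresholds trade variety for fidelity. Here is a minimal NumPy sketch under that reading, where `generator` stands in for any trained class-conditional generator rather than BigGAN itself.

```python
# Sketch of the truncation trick: resample latent coordinates that fall
# outside [-threshold, threshold]. A small threshold gives higher-fidelity,
# less varied samples; threshold -> infinity recovers the plain N(0, I) prior.
import numpy as np

def truncated_z(batch_size, dim, threshold, rng=None):
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((batch_size, dim))
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())  # redraw offending coords
        mask = np.abs(z) > threshold
    return z

z = truncated_z(batch_size=8, dim=128, threshold=0.5)
# images = generator(z)  # placeholder: any trained class-conditional generator
```

As for the orthogonal regularization applied to the generator, the variant the paper settles on penalizes only the off-diagonal entries of WᵀW, i.e. R(W) = β · ||WᵀW ⊙ (1 − I)||²_F, relaxing strict orthogonality while still keeping filters from collapsing onto one another.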
BigGAN's generator architecture
Generated samples: remarkably lifelike
2.Recurrent Experience Replay in Distributed Reinforcement Learning
Paper:
https://openreview.net/pdf?id=r1lyTjAqYX
Abstract:
Building on the recent successes of distributed training of RL agents, in this paper we investigate the training of RNN-based RL agents from experience replay. We investigate the effects of parameter lag resulting in representational drift and recurrent state staleness and empirically derive an improved training strategy. Using a single network architecture and fixed set of hyper-parameters, the resulting agent, Recurrent Replay Distributed DQN, triples the previous state of the art on Atari-57, and surpasses the state of the art on DMLab-30. R2D2 is the first agent to exceed human-level performance in 52 of the 57 Atari games.
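Concretely, the improved training strategy in the published paper stores the recurrent state alongside each replayed sequence and "burns in" a prefix of the sequence to refresh that state before any loss is computed. The toy sketch below illustrates only that replay mechanic; the RNN cell and all names are ours, not the R2D2 implementation.

```python
# Sketch of R2D2-style replay: each stored sequence carries the recurrent
# state recorded when it was generated; a short "burn-in" prefix is run
# through the network to refresh that state before the training loss is
# computed on the remainder. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, OBS = 16, 8
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1
W_x = rng.standard_normal((OBS, HIDDEN)) * 0.1

def rnn_step(h, x):
    # A toy recurrent cell standing in for the agent's LSTM core.
    return np.tanh(h @ W_h + x @ W_x)

def replay_training_states(sequence, stored_state, burn_in=4):
    """Return the hidden states used for loss computation.

    sequence: observations, shape (T, OBS)
    stored_state: recurrent state saved by the actor at sequence start
    """
    h = stored_state
    for x in sequence[:burn_in]:        # burn-in: refresh state, no loss
        h = rnn_step(h, x)
    training_states = []
    for x in sequence[burn_in:]:        # remainder: these states feed the loss
        h = rnn_step(h, x)
        training_states.append(h)
    return training_states

seq = rng.standard_normal((12, OBS))
stored = np.zeros(HIDDEN)               # state recorded when the actor ran
states = replay_training_states(seq, stored)
print(len(states), "states available for the TD loss")
```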
3.Shallow Learning For Deep Networks
Paper:
https://openreview.net/forum?id=r1Gsk3R9Fm
Abstract:
Shallow supervised neural networks with a single hidden layer have a number of favorable properties that make them easier to interpret, analyze, and optimize than their deep counterparts, but they lack representational power. Here, the authors use 1-hidden-layer learning problems to sequentially build deep networks layer by layer, so the result can inherit properties from shallow networks. Contrary to previous approaches using shallow networks, they focus on problems where deep learning is considered critical to success, and therefore study CNNs on two large-scale image recognition tasks: ImageNet and CIFAR-10. Using a set of simple architecture and training ideas, they find that solving sequential 1-hidden-layer auxiliary problems yields a CNN that exceeds AlexNet's performance on ImageNet. Extending this training method to construct individual layers by solving 2- and 3-hidden-layer auxiliary problems, they obtain an 11-layer network that exceeds VGG-11 on ImageNet, reaching 89.8% top-5 single-crop accuracy. To their knowledge, this is the first competitive alternative to end-to-end training of CNNs that scales to ImageNet. They conduct extensive experiments to study the properties it induces on intermediate layers.
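In other words, the network is grown greedily: train one new block against a throwaway auxiliary classifier while everything trained earlier stays frozen, then repeat. The PyTorch sketch below is our own simplification of that loop, using small fully connected blocks instead of the paper's CNNs; `loader` is a placeholder for your dataset.

```python
# Greedy layer-wise training sketch: each new block is trained with its own
# auxiliary linear classifier while all earlier blocks stay frozen.
# Simplified illustration; the paper uses CNN blocks on ImageNet/CIFAR-10.
import torch
import torch.nn as nn

def train_block(frozen, block, head, loader, epochs=1):
    opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()),
                          lr=0.1, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            with torch.no_grad():        # earlier layers receive no gradients
                x = frozen(x)
            logits = head(block(x))      # the auxiliary shallow problem
            loss = loss_fn(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Build the deep net one block at a time.
blocks, width, n_classes = [], 256, 10
frozen = nn.Identity()
for depth in range(3):
    in_dim = 784 if depth == 0 else width
    block = nn.Sequential(nn.Linear(in_dim, width), nn.ReLU())
    head = nn.Linear(width, n_classes)   # discarded after this stage
    # train_block(frozen, block, head, loader)  # loader: your dataset
    blocks.append(block)
    frozen = nn.Sequential(*blocks)      # everything trained so far
```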
4.Relational Graph Attention Networks
Paper:
https://openreview.net/forum?id=Bklzkh0qFm&noteId=HJxMHja3Y7
Abstract:
In this paper we present Relational Graph Attention Networks, an extension of Graph Attention Networks to incorporate both node features and relational information into a masked attention mechanism, extending graph-based attention methods to a wider variety of problems, specifically, predicting the properties of molecules. We demonstrate that our attention mechanism gives competitive results on a molecular toxicity classification task (Tox21), enhancing the performance of its spectral-based convolutional equivalent. We also investigate the model on a series of transductive knowledge base completion tasks, where its performance is noticeably weaker. We provide insights as to why this may be, and suggest when it is appropriate to incorporate an attention layer into a graph architecture.
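To make "relational" concrete: rather than one shared attention mechanism over all edges, each relation type gets its own projection and attention parameters, and masking restricts a node to neighbors connected under that relation. The NumPy sketch below is our illustrative formulation of such a layer, not the authors' exact parameterization.

```python
# Sketch of a relational graph attention layer: separate projection and
# attention parameters per relation type, with masking so a node only
# attends to neighbors connected under that relation.
import numpy as np

rng = np.random.default_rng(0)
N, F, F_out, R = 5, 4, 6, 2           # nodes, in/out features, relations

X = rng.standard_normal((N, F))       # node features
A = rng.integers(0, 2, (R, N, N))     # one adjacency matrix per relation
W = rng.standard_normal((R, F, F_out))
a = rng.standard_normal((R, 2 * F_out))

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

H = np.zeros((N, F_out))
for r in range(R):
    Hr = X @ W[r]                                 # per-relation projection
    # Attention logits e_ij = a_r . [h_i || h_j] for every node pair.
    logits = (Hr @ a[r, :F_out])[:, None] + (Hr @ a[r, F_out:])[None, :]
    logits = np.where(A[r] > 0, logits, -1e9)     # mask non-neighbors
    alpha = softmax(logits, axis=1)
    H += alpha @ Hr                               # aggregate over neighbors
print(H.shape)
```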
5.A Solution to China Competitive Poker Using Deep Learning
A deep learning algorithm for the card game Dou Dizhu
Paper:
https://openreview.net/forum?id=rJzoujRct7
Abstract:
Recently, deep neural networks have achieved superhuman performance in various games such as Go, chess and Shogi. Compared to Go, China Competitive Poker, also known as Dou dizhu, is a type of imperfect information game, including hidden information, randomness, multi-agent cooperation and competition. It has become widespread and is now a national game in China. We introduce an approach to play China Competitive Poker using Convolutional Neural Network (CNN) to predict actions. This network is trained by supervised learning from human game records. Without any search, the network already beats the best AI program by a large margin, and also beats the best human amateur players in duplicate mode.
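To give a feel for how a card game becomes CNN input: one common encoding, and roughly the shape of what the paper does, is a binary rank-by-count grid per hand, with the network predicting over legal moves. The snippet below is a hedged illustration; the paper's actual feature planes and action space are richer.

```python
# Sketch: encode a Dou Dizhu hand as a binary grid (15 ranks x up to 4
# copies of each rank) suitable as one CNN input plane. Illustrative only;
# the paper's actual feature planes and action encoding differ.
import numpy as np

RANKS = ["3", "4", "5", "6", "7", "8", "9", "10",
         "J", "Q", "K", "A", "2", "BJ", "RJ"]  # BJ/RJ: black/red joker

def encode_hand(hand):
    plane = np.zeros((len(RANKS), 4), dtype=np.float32)
    for i, rank in enumerate(RANKS):
        count = hand.count(rank)      # 0..4 copies (jokers: at most 1)
        plane[i, :count] = 1.0
    return plane

hand = ["3", "3", "3", "7", "10", "J", "J", "Q", "K", "A", "2", "2", "RJ"]
plane = encode_hand(hand)
print(plane.shape)                    # (15, 4) -> one CNN input channel
```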
Other interesting papers:
What highlights of ICLR 2019 are worth paying attention to? - Bolei Zhou's answer on Zhihu
https://www.zhihu.com/question/296404213/answer/500575759
Titles that open with a question:
Are adversarial examples inevitable?
Transfer Value or Policy? A Value-centric Framework Towards Transferrable Continuous Reinforcement Learning
How Important is a Neuron?
How Powerful are Graph Neural Networks?
Do Language Models Have Common Sense?
Is Wasserstein all you need?
Philosophical-aphorism style:
Learning From the Experience of Others: Approximate Empirical Bayes in Neural Networks
In Your Pace: Learning the Right Example at the Right Time
Learning what you can do before doing anything
Like What You Like: Knowledge Distill via Neuron Selectivity Transfer
Don't Settle for Average, Go for the Max: Fuzzy Sets and Max-Pooled Word Vectors
Tongue-in-cheek style:
Look Ma, No GANs! Image Transformation with ModifAE
No Pressure! Addressing Problem of Local Minima in Manifold Learning
Backplay: 'Man muss immer umkehren'
Talk The Walk: Navigating Grids in New York City through Grounded Dialogue
Fatty and Skinny: A Joint Training Method of Watermark
A bird's eye view on coherence, and a worm's eye view on cohesion
Beyond Winning and Losing: Modeling Human Motivations and Behaviors with Vector-valued Inverse Reinforcement Learning
One-sentence-summary style:
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness.
Original title: Top 5 hottest ICLR 2019 papers: BigGAN, a Dou Dizhu deep learning algorithm, and more
Source: WeChat public account 新智元 (ID: AI_era). You are welcome to follow! Please credit the source when reposting.