[ml.nature] 191209/Automated abnormality detection in lower extremity radiographs using deep learning
Nature published a paper on detecting musculoskeletal disorders with machine learning; it examines how pretraining, dataset size, and model architecture affect model performance. The authors collected and released a dataset of 93,455 lower extremity radiographs covering multiple body parts. The results show that a single CNN model can effectively identify a variety of abnormalities in highly variable radiographs across multiple body parts.
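A rough sketch of the kind of pretraining comparison the paper runs (the model head, data pipeline, and training loop here are placeholders, not the paper's code):

```python
# Minimal sketch: compare ImageNet-pretrained vs. randomly initialized
# DenseNet-121 on a binary abnormality-detection task. Illustrative only.
import torch
import torch.nn as nn
import torchvision

def make_model(pretrained: bool) -> nn.Module:
    model = torchvision.models.densenet121(pretrained=pretrained)
    # Replace the 1000-class ImageNet head with a single abnormality logit.
    model.classifier = nn.Linear(model.classifier.in_features, 1)
    return model

for pretrained in (True, False):
    model = make_model(pretrained)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    # train_loader would yield (radiograph batch, abnormal/normal labels);
    # training loop elided.
```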
[cs.CV] 191208/Individual predictions matter: Assessing the effect of data ordering in training fine-tuned CNNs for medical imaging
Because CheXNet fine-tunes a pre-trained DenseNet, the random seed affects the ordering of the batches of training data but not the initialized model weights. We found substantial variability in predictions for the same radiograph across model runs (mean ln(P_max/P_min) = 2.45, coefficient of variation 0.543). This individual radiograph-level variability was not fully reflected in the variability of AUC on a large test set.
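These two statistics are easy to compute from per-run predictions; a quick sketch with placeholder numbers:

```python
# Sketch: per-radiograph variability across model runs.
# preds[i, j] = predicted probability for radiograph i from run j.
import numpy as np

rng = np.random.default_rng(0)
preds = rng.uniform(0.05, 0.95, size=(1000, 10))  # placeholder predictions

log_range = np.log(preds.max(axis=1) / preds.min(axis=1))  # ln(P_max / P_min)
cv = preds.std(axis=1) / preds.mean(axis=1)                # coefficient of variation

print("mean ln(P_max/P_min):", log_range.mean())
print("mean coefficient of variation:", cv.mean())
```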
[cs.LG] 191210/Winning the Lottery with Continuous Sparsification
The Lottery Ticket Hypothesis from Frankle & Carbin (2019) conjectures that, for typically-sized neural networks, it is possible to find small sub-networks which train faster and yield superior performance than their original counterparts. The proposed algorithm to search for "winning tickets", Iterative Magnitude Pruning, consistently finds sub-networks with 90-95% less parameters which train faster and better than the overparameterized models they were extracted from, creating potential applications to problems such as transfer learning.
We propose Continuous Sparsification, a new algorithm to search for winning tickets that continuously removes parameters from the network during training, learning the sub-network's structure with gradient-based methods instead of relying on pruning strategies. Empirically, our method finds tickets that outperform those found by Iterative Magnitude Pruning, while offering a faster search measured in either training epochs or wall-clock time.
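For reference, a sketch of the Iterative Magnitude Pruning loop that Continuous Sparsification is compared against (the training function, schedule, and pruning fraction are placeholders, not either paper's code):

```python
# Sketch of Iterative Magnitude Pruning (Frankle & Carbin, 2019):
# train, prune the smallest-magnitude surviving weights, rewind the
# survivors to their initial values, and repeat. `train` is a placeholder.
import copy
import torch

def imp(model, train, prune_fraction=0.2, rounds=10):
    init_state = copy.deepcopy(model.state_dict())  # weights at initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        train(model, masks)  # train with masked weights
        for name, p in model.named_parameters():
            alive = p[masks[name].bool()].abs()
            if alive.numel() == 0:
                continue
            threshold = alive.quantile(prune_fraction)
            # `*=` keeps previously pruned weights at zero.
            masks[name] *= (p.abs() > threshold).float()
        # Rewind surviving weights to their initial values.
        model.load_state_dict(init_state)
    return masks
```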
[cs.LG] 191205/Why Should we Combine Training and Post-Training Methods for Out-of-Distribution Detection?
A survey of out-of-distribution (OOD) detection algorithms. OOD detection addresses the problem that neural networks fail to flag samples drawn from a distribution different from that of the training data.
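For orientation, the simplest post-training detector in this literature is the maximum softmax probability baseline (Hendrycks & Gimpel, 2017); a minimal sketch:

```python
# Sketch: max-softmax-probability OOD score (Hendrycks & Gimpel, 2017).
# Inputs whose highest class probability is low are flagged as OOD.
import torch
import torch.nn.functional as F

def msp_score(logits: torch.Tensor) -> torch.Tensor:
    """Higher score = more in-distribution."""
    return F.softmax(logits, dim=-1).max(dim=-1).values

def is_ood(logits: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    return msp_score(logits) < threshold
```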
[cs.LG] 191123/Outlier Exposure with Confidence Control for Out-of-Distribution Detection
Based on the Outlier Exposure (OE) technique, we propose a novel loss function ...
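For context, the base Outlier Exposure objective (Hendrycks et al., 2019) that this work builds on trains the classifier to be uncertain on an auxiliary outlier set; a minimal sketch of that base objective (the paper's own confidence-control term is not reproduced here):

```python
# Sketch of the base Outlier Exposure objective: standard cross-entropy on
# in-distribution data plus a term pushing the model toward a uniform
# posterior on auxiliary outlier data.
import torch
import torch.nn.functional as F

def oe_loss(logits_in, targets_in, logits_out, lam=0.5):
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy to the uniform distribution over classes:
    # -(1/K) * sum_k log p_k = -mean(log_probs).
    log_probs_out = F.log_softmax(logits_out, dim=-1)
    ce_uniform = -log_probs_out.mean(dim=-1).mean()
    return ce_in + lam * ce_uniform
```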
[cs.LG] 191205/Screening Data Points in Empirical Risk Minimization via Ellipsoidal Regions and Safe Loss Functions
We design simple screening tests to automatically discard data samples in empirical risk minimization without losing optimization guarantees. We derive loss functions that produce dual objectives with a sparse solution...
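As a toy illustration of the screening idea (not the paper's ellipsoidal test): with a squared hinge loss, any sample whose margin provably stays above 1 for every candidate solution in a ball contributes zero loss and gradient there, so it can be discarded without changing the optimum. A hedged sketch:

```python
# Toy screening test (illustrative only): with a squared hinge loss, a
# sample with margin y_i * <w, x_i> >= 1 for every w in the ball
# ||w - w0|| <= r has zero loss over that ball and can be discarded.
import numpy as np

def screen(X, y, w0, r):
    margins = y * (X @ w0)
    # Worst-case margin drop over the ball is r * ||x_i||.
    worst_case = margins - r * np.linalg.norm(X, axis=1)
    keep = worst_case < 1.0  # only these can have nonzero squared-hinge loss
    return X[keep], y[keep]
```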
[cs.LG] 181218/Jointly Learning Convolutional Representations to Compress Radiological Images and Classify Thoracic Diseases in the Compressed Domain
we introduce a convolutional neural network (CNN) based classification approach which learns to reduce the resolution of the image using an autoencoder and at the same time classify it using another network, while both the tasks are trained jointly. This algorithm guides the model to learn essential representations from high-resolution images for classification along with reconstruction.
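A compact sketch of the joint objective described above (layer sizes and the loss weighting are placeholder choices, not the paper's architecture):

```python
# Sketch: jointly train an autoencoder that downsamples the image and a
# classifier that operates on the compressed representation.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, num_classes=14):
        super().__init__()
        self.encoder = nn.Sequential(                 # halves the resolution
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU())
        self.decoder = nn.Sequential(                 # reconstructs the input
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1))
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

def joint_loss(x, labels, model, alpha=0.5):
    # labels: float multi-label targets, same shape as the logits.
    recon, logits = model(x)
    return (alpha * nn.functional.mse_loss(recon, x)
            + (1 - alpha)
            * nn.functional.binary_cross_entropy_with_logits(logits, labels))
```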
[stat.ML] 191205/Normalizing Flows for Probabilistic Modeling and Inference
[cs.osl] 191209/ChainerRL: A Deep Reinforcement Learning Library
[cs.osl] 191205/Neural Tangents: Fast and Easy Infinite Neural Networks in Python
Neural Tangents is a library designed to enable research into infinite-width neural networks. It provides a high-level API for specifying complex and hierarchical neural network architectures. These networks can then be trained and evaluated either at finite-width as usual or in their infinite-width limit. Infinite-width networks can be trained analytically using exact Bayesian inference or using gradient descent via the Neural Tangent Kernel. Additionally, Neural Tangents provides tools to study gradient descent training dynamics of wide but finite networks in either function space or weight space.
The entire library runs out-of-the-box on CPU, GPU, or TPU. All computations can be automatically distributed over multiple accelerators with near-linear scaling in the number of devices. Neural Tangents is available at this http URL. We also provide an accompanying interactive Colab notebook.
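A minimal usage sketch following the library's documented stax/predict API (the architecture and data here are placeholders):

```python
# Sketch of the Neural Tangents workflow: define an architecture with
# nt.stax, then do closed-form infinite-width inference via the NTK.
import jax.numpy as jnp
import neural_tangents as nt
from neural_tangents import stax

init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1))

x_train = jnp.ones((20, 10))   # placeholder data
y_train = jnp.ones((20, 1))
x_test = jnp.ones((5, 10))

# Predictions of the infinite-width network trained by gradient descent
# ('ntk') or via exact Bayesian inference ('nngp').
predict_fn = nt.predict.gradient_descent_mse_ensemble(kernel_fn, x_train, y_train)
y_ntk = predict_fn(x_test=x_test, get='ntk')
```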
[Programming] code style
[blog] Approach pre-trained deep learning models with caution
Original post: https://www.cnblogs.com/offduty/p/12048264.html