Project author: ashwinpn

Project description:
Concepts, Research Papers, Implementations - The works.

Repository: git://github.com/ashwinpn/ML-Learnings.git
Created: 2020-02-28T01:05:15Z
Project page: https://github.com/ashwinpn/ML-Learnings



NOTES / IMPORTANT INSIGHTS

  1. Exploding gradients (solved by gradient clipping; a short sketch follows this list).
  2. Dying ReLU: no learning if the activation is 0 (solved by Parametric ReLU).
  3. The mean and variance of the activations are not 0 and 1 (partially addressed by subtracting roughly 0.5 from the activation; explained better in the fastai videos).
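
A minimal PyTorch sketch of the first two fixes - gradient clipping and a parametric ReLU. The model, data shapes, and hyperparameters are made up for illustration.

```python
import torch
import torch.nn as nn

# PReLU instead of ReLU: units with negative pre-activations still get a
# (learned) non-zero slope, so they cannot "die".
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.PReLU(),
    nn.Linear(128, 10),
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    # Gradient clipping: rescale the global gradient norm to at most 1.0
    # before the update, guarding against exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    return loss.item()

print(train_step(torch.randn(32, 784), torch.randint(0, 10, (32,))))
```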

5-23-2020

XGBoost on the HIGGS dataset, comparing GPU (`gpu_hist`) and CPU (`hist`) training time:

```python
import os
import time

import numpy as np
import pandas
import xgboost as xgb
from urllib.request import urlretrieve

data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz"
dmatrix_train_filename = "higgs_train.dmatrix"
dmatrix_test_filename = "higgs_test.dmatrix"
csv_filename = "HIGGS.csv.gz"
train_rows = 10500000
test_rows = 500000
num_round = 1000

plot = True


def load_higgs():
    """Return the HIGGS train/test splits as XGBoost DMatrix objects."""
    # Reuse the cached binary DMatrix files if they already exist.
    if os.path.isfile(dmatrix_train_filename) and os.path.isfile(dmatrix_test_filename):
        dtrain = xgb.DMatrix(dmatrix_train_filename)
        dtest = xgb.DMatrix(dmatrix_test_filename)
        if dtrain.num_row() == train_rows and dtest.num_row() == test_rows:
            print("Loading cached dmatrix...")
            return dtrain, dtest

    if not os.path.isfile(csv_filename):
        print("Downloading higgs file...")
        urlretrieve(data_url, csv_filename)

    # Column 0 is the label, columns 1-28 are the features.
    df_higgs_train = pandas.read_csv(csv_filename, dtype=np.float32,
                                     nrows=train_rows, header=None)
    dtrain = xgb.DMatrix(df_higgs_train.iloc[:, 1:29], df_higgs_train.iloc[:, 0])
    dtrain.save_binary(dmatrix_train_filename)
    df_higgs_test = pandas.read_csv(csv_filename, dtype=np.float32,
                                    skiprows=train_rows, nrows=test_rows,
                                    header=None)
    dtest = xgb.DMatrix(df_higgs_test.iloc[:, 1:29], df_higgs_test.iloc[:, 0])
    dtest.save_binary(dmatrix_test_filename)
    return dtrain, dtest


dtrain, dtest = load_higgs()
param = {}
param['objective'] = 'binary:logitraw'
param['eval_metric'] = 'error'
param['tree_method'] = 'gpu_hist'
param['silent'] = 1

print("Training with GPU ...")
tmp = time.time()
gpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, "test")],
          evals_result=gpu_res)
gpu_time = time.time() - tmp
print("GPU Training Time: %s seconds" % (str(gpu_time)))

print("Training with CPU ...")
param['tree_method'] = 'hist'
tmp = time.time()
cpu_res = {}
xgb.train(param, dtrain, num_round, evals=[(dtest, "test")],
          evals_result=cpu_res)
cpu_time = time.time() - tmp
print("CPU Training Time: %s seconds" % (str(cpu_time)))

if plot:
    import matplotlib.pyplot as plt
    min_error = min(min(gpu_res["test"][param['eval_metric']]),
                    min(cpu_res["test"][param['eval_metric']]))
    gpu_iteration_time = [x / (num_round * 1.0) * gpu_time for x in range(0, num_round)]
    cpu_iteration_time = [x / (num_round * 1.0) * cpu_time for x in range(0, num_round)]
    plt.plot(gpu_iteration_time, gpu_res['test'][param['eval_metric']],
             label='Tesla P100')
    plt.plot(cpu_iteration_time, cpu_res['test'][param['eval_metric']],
             label='2x Haswell E5-2698 v3 (32 cores)')
    plt.legend()
    plt.xlabel('Time (s)')
    plt.ylabel('Test error')
    plt.axhline(y=min_error, color='r', linestyle='dashed')
    plt.margins(x=0)
    plt.ylim((0.23, 0.35))
    plt.show()
```

5-22-2020

5-16-2020

4-22-2020

4-4-2020

  • BERT for NLP

3-21-2020

Basic recap: k-NN, Naive Bayes, SVM, Decision Forests, Data Mining, Clustering, and Classification

3-20-2020

  1. Monte Carlo methods are ideal for sampling when we have elements that interact with each other - hence their applicability to physics problems (a toy sketch follows this list).
  2. See https://covid19-dash.github.io/
  3. Check out https://upload.wikimedia.org/wikipedia/commons/8/86/Average_yearly_temperature_per_country.png
  4. Then see https://en.wikipedia.org/wiki/List_of_countries_by_median_age#/media/File:Median_age_by_country,_2016.svg
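
A toy Monte Carlo sketch (estimating pi by uniform sampling), only to make the sampling idea concrete; it is unrelated to the linked dashboards.

```python
import numpy as np

def estimate_pi(n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of pi: the fraction of uniform points in the
    unit square that land inside the quarter circle, times 4."""
    rng = np.random.default_rng(seed)
    xy = rng.random((n_samples, 2))
    inside = (xy ** 2).sum(axis=1) <= 1.0
    return 4.0 * inside.mean()

print(estimate_pi())   # ~3.141
```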

3-19-2020

  • The Pitfalls of A/B testing
  1. Sequential testing introduces a considerable amount of error into your conclusions - interactions between different elements also need to be taken into account when making data-driven decisions.
  2. The test should be allowed to run to the end - since we are analysing randomized samples, the results halfway through and the results at the end can be polar opposites of each other (!).
  3. "The smaller the improvement, the less reliable the results."
  4. Retest (at least a couple more times). Even with a statistically significant result, there is a fairly large probability of a false-positive error.
  • Data Visualization Pitfalls
  1. https://junkcharts.typepad.com/
  • Robust regression: https://stats.idre.ucla.edu/r/dae/robust-regression/
  1. Least absolute deviation
  2. Iteratively reweighted least squares (a small sketch follows this list)
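
A small sketch of robust regression via statsmodels' RLM, which is fitted by iteratively reweighted least squares; the synthetic data and outliers are made up for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 1, 200)
y[:10] += 30                              # inject a few gross outliers

X = sm.add_constant(x)                    # design matrix with intercept

ols_fit = sm.OLS(y, X).fit()              # ordinary least squares
rlm_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # IRLS, Huber loss

print("OLS coefficients:   ", ols_fit.params)
print("Robust coefficients:", rlm_fit.params)   # much closer to (1, 2)
```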

3-18-2020

  1. What factors influence when and how to fine-tune? The size of the new dataset and its similarity to the original dataset.
  2. Pre-trained network weights are provided at https://github.com/BVLC/caffe/wiki/Model-Zoo

3-17-2020

  1. Deep learning can, in simple words, be described as taking a thought and refining it again and again, rather than as deductive reasoning.
  2. An important question regarding AI: how can we program machines to experience qualitative states of experience - read: consciousness and self-awareness?
  3. Speech recognition is a very interesting and complex problem, concisely described in the paper "Hidden Voice Commands". Interestingly, the attack generated sounds that a human would NEVER make (compare AlphaGo).
  4. AlphaGo likewise played moves that a human Go player would never have been expected to play = LEARNING.
  • NLP
  1. Check out wikification.

Flow-GANs

Flow-GAN: Combining Maximum Likelihood and Adversarial Learning

Variational Autoencoders

- Tutorial on Variational Autoencoders

  • Variational Autoencoders (VAEs) are powerful generative models.
  • They are one of the most popular approaches to unsupervised learning of complicated distributions.
  • VAEs have already shown promise in generating many kinds of complicated data, including handwritten digits, faces, house numbers, CIFAR images, physical models of scenes, segmentation, and predicting the future from static images. (A minimal sketch follows this list.)
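
A minimal fully connected VAE sketch in PyTorch, assuming flattened 28x28 inputs; the layer sizes and the random stand-in batch are made up for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=20):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)        # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)    # log-variance of q(z|x)
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def elbo_loss(x, x_hat, mu, logvar):
    # Reconstruction term plus KL divergence from the standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(16, 784)                   # stand-in batch of "images"
x_hat, mu, logvar = VAE()(x)
print(elbo_loss(x, x_hat, mu, logvar))
```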

Read up on BERT

Bidirectional Encoder Representations from Transformers - NLP Pre-training.
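
A minimal sketch using the Hugging Face transformers library (not part of these notes) to load a pre-trained BERT and extract contextual token embeddings.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("BERT produces contextual token embeddings.",
                   return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# One 768-dimensional vector per (sub)word token for bert-base.
print(outputs.last_hidden_state.shape)    # torch.Size([1, n_tokens, 768])
```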

XGBoost

Extreme Gradient Boosting - has recently given some of the best results on problems involving structured data.

Gradient Boosting

  • Why does AdaBoost work so well?
  • Gradient boosting is based on an ensemble of decision trees, i.e. it builds a strong classifier from a combination of weak classifiers (decision stumps); a small sketch follows this list.
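
A small scikit-learn sketch of boosting weak learners with AdaBoost; the synthetic dataset is made up for illustration (by default AdaBoostClassifier boosts depth-1 trees, i.e. stumps).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each boosting round fits a stump, then re-weights the training points
# the current ensemble still gets wrong.
clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```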

Miscellany

  • Keras-on-Theano optimizers - SAGA, liblinear (log loss for high-dimensional data), Adam (incremental gradient descent)
  • Adam is basically RMSprop plus a momentum term (see the sketch after this list)
  • You can add Nesterov Accelerated Gradient (NAG) to improve it:

    Incorporating Nesterov Momentum into Adam

    NAG
  • Yet in some cases Adam performs poorly compared to vanilla SGD - why?
  • Does ReLU always provide a better non-linearity?
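
A rough NumPy sketch of a single Adam update, to make the "RMSprop + momentum" reading concrete; the hyperparameters are the usual defaults and the toy objective is made up for illustration.

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update of parameters w given gradient g.
    m: running mean of gradients (the momentum-like term).
    v: running mean of squared gradients (the RMSprop-like term)."""
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g ** 2
    m_hat = m / (1 - beta1 ** t)          # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 5001):
    w, m, v = adam_step(w, 2 * w, m, v, t)
print(w)                                  # approaches [0, 0]
```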

Reinforcement Learning

The agent learns from the environment and receives rewards/penalties as a result of its actions. Its objective is to devise a policy function that maximizes cumulative reward.
It is different from supervised and unsupervised learning.
It is based on Markov Decision Processes, but model-free paradigms such as Q-Learning perform better, especially on complex tasks.

  • Monte Carlo Policy Gradient (REINFORCE, actor-critic)
  • Problems arise with gradient values and variance; we need to define a baseline and use the Bellman equation.
    Exploration (exploring new states) vs. Exploitation (maximizing overall reward)
  • Normal greedy approach: focus only on exploitation.
  • Epsilon-greedy approach: explore a random action with probability epsilon, and exploit the best-known action otherwise (see the sketch after this list).
  • Deep Q Networks (DQN)
    When the number of states / actions becomes too large, it is more efficient to use neural networks.

    In the case of DQN, instead of a Bellman update, we rewrite the Bellman equation in an RMSE-like form, which becomes our cost function.
  • Policy Improvement Methods
  • Temporal Difference Methods
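
A tabular Q-learning sketch with an epsilon-greedy policy on a toy 1-D chain environment (the environment is made up for illustration); the update is the sample-based form of the Bellman equation.

```python
import numpy as np

n_states, n_actions = 5, 2            # chain of 5 states; move left (0) or right (1)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):
    """Toy dynamics: reaching the right end of the chain pays +1 and ends the episode."""
    s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s_next == n_states - 1
    return s_next, float(done), done

for episode in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy: explore a random action with probability epsilon,
        # otherwise exploit the current best estimate.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next, r, done = step(s, a)
        # Q-learning update: move Q(s, a) toward the Bellman target.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() * (not done) - Q[s, a])
        s = s_next

print(Q)                              # the "move right" column should dominate
```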

Transfer Learning

Use a model trained on one problem to do predictive modelling on another problem.
For instance, say you have an image classification task.
You can use a pre-trained VGG model, conveniently provided by Oxford's Visual Geometry Group.
You would definitely need to change the last few layers for your task, and other changes would require
hypothesis testing / domain knowledge.

Transfer learning is especially useful for supervised learning tasks that would otherwise require
a significantly large labeled dataset to tackle successfully.
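
A minimal torchvision sketch of the idea: keep a pre-trained VGG-16 feature extractor frozen and replace only the final classifier layer, here for a hypothetical 5-class task (older torchvision versions use pretrained=True instead of the weights argument).

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5                       # hypothetical target task

# Load VGG-16 with ImageNet weights from the torchvision model zoo.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor.
for p in model.features.parameters():
    p.requires_grad = False

# Replace the last fully connected layer with one sized for the new task.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

# Train only the parameters that are still trainable.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=1e-3, momentum=0.9)
```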

Visualization

  • Matplotlib is still popular in general.
  • Pandas can also be used for visualization (a small sketch follows this list).
  • Plotly.js and D3.js produce beautiful output that can be rendered in browsers.
  • Bokeh has become popular of late; it has bindings in Python, Lua, Julia, Java, and Scala.
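
A small sketch of the pandas plotting interface (which wraps Matplotlib); the training-log data frame is made up for illustration.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

epochs = np.arange(1, 21)
df = pd.DataFrame({
    "epoch": epochs,
    "train_loss": np.exp(-0.20 * epochs) + 0.05,
    "val_loss": np.exp(-0.15 * epochs) + 0.10,
})

ax = df.plot(x="epoch", y=["train_loss", "val_loss"])   # pandas calls Matplotlib
ax.set_ylabel("loss")
plt.show()
```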

Regularization Techniques

Regularization is used for reducing overfitting.

  • L1, L2 regularization : regularization over weights
  • ElasticNet - L1 + L2 regularization
  • Adversarial learning - Some tasks that humans perform very easily turn out to be very difficult for a computer. For example, if you introduce a little noise into the photo of a lion, it may no longer be recognized as a lion (or worse, not as an animal at all). Deliberately adding noise to an augmented training set to improve robustness is called jittering.
  • Dropout - randomly drop neural network units during training to reduce overfitting (a small sketch follows this list).
  • Tikhonov regularization / Ridge Regression - regularization of ill-posed problems
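
A minimal PyTorch sketch combining two of the techniques above: dropout inside the network and an L2 penalty (weight decay) in the optimizer. The architecture and hyperparameters are made up for illustration.

```python
import torch
import torch.nn as nn

# Dropout randomly zeroes 50% of the activations during training.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)

# weight_decay adds an L2 penalty on the weights (ridge-style regularization).
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=1e-4)

model.train()   # dropout active during training
# ... training loop ...
model.eval()    # dropout disabled at evaluation time
```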

Probabilistic Graphical Models

  • Inferential Learning
  • Markov Random Fields
  • Conditional Random Fields
  • Bayesian Networks

Stochastic Gradient Descent

  • What is the ideal batch size?
  • Dealing with Vanishing Gradients (very small values of d/dw)

CNNs

  • Pooling + strides are used for downsampling the feature map (see the sketch below).
  • AlexNet, GoogLeNet, VGG, DenseNet.
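
A quick PyTorch sketch of how a strided convolution and max pooling each halve the spatial size of the feature map; the input shape is made up for illustration.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 32, 32)          # (batch, channels, height, width)

conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)   # strided conv
pool = nn.MaxPool2d(kernel_size=2)                            # 2x2 max pooling

print(conv(x).shape)          # torch.Size([1, 16, 16, 16]) - stride halves H and W
print(pool(conv(x)).shape)    # torch.Size([1, 16, 8, 8])   - pooling halves again
```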

Convergence

  • Vanilla SGD achieves O(1/t) convergence on smooth convex functions
  • Nesterov Accelerated Gradient (NAG) achieves O(1/t^2) convergence on smooth convex functions
  • Newton methods achieve O(1/t^3) convergence on smooth convex functions
  • Arora, Mianjy, et al. - study convex-relaxation-based formulations of optimization problems

Expectation Maximization

  • Baum-Welch
  • Forward-Backward Algorithm