If you're not sure which algorithm to use for your propensity model, start with logistic regression and then consider the following options:
- Logistic regression. The Logistic Regression algorithm is very efficient and an easy way to compute propensities quickly on almost any dataset, which makes it a great place to start. However, it tends to be less accurate on datasets with many variables, complex or non-linear relationships, or collinear variables. In those cases you should try other algorithms, such as Random Forest.
- Random forest. If you've already tried Logistic Regression and are in doubt, use the Random Forest algorithm. Random Forest is robust and performs well on a wide variety of tabular datasets. It tends to be somewhat slower but will do a great job classifying most datasets.
- XGBoost. The XGBoost classifier performs similarly to Random Forest. In general XGBoost is more prone to overfitting and less tolerant of messy data, but it tends to do better when the dataset is imbalanced, i.e., when the outcome you are trying to predict is infrequent.
- Gradient boosting classifier. In most cases the Gradient Boosting classifier will not perform as well as Random Forest: it is more sensitive to overfitting and slower to train. However, it occasionally does better in cases where Random Forest may be biased or limited, e.g., with categorical variables that have many levels.
- Adaptive boosting. The AdaBoost (adaptive boosting) classifier will generally not perform as well as Random Forest or XGBoost. It will be both slower and more sensitive to noise. However, it can occasionally perform better with high-quality datasets when overfitting is a concern.
- Extra trees. The Extra Trees (extremely randomized trees) classifier is similar to Random Forest. It tends to be significantly faster, but it will not do as well as Random Forest on noisy datasets with many variables.
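To make the trade-offs above concrete, here is a minimal sketch that compares these classifiers on a synthetic dataset using scikit-learn, scoring each with cross-validated ROC AUC (a common choice for propensity models, since they are judged on how well they rank units by treatment likelihood). The dataset shape and hyperparameters are illustrative only; XGBoost is omitted because it lives in the separate `xgboost` package, but it plugs into the same loop via `xgboost.XGBClassifier`.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    AdaBoostClassifier,
    ExtraTreesClassifier,
    GradientBoostingClassifier,
    RandomForestClassifier,
)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative synthetic dataset; substitute your own features and
# treatment indicator here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(n_estimators=100, random_state=0),
}

scores = {}
for name, model in models.items():
    # 5-fold cross-validated ROC AUC for each candidate algorithm.
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    scores[name] = auc
    print(f"{name}: mean AUC = {auc:.3f}")
```

On your own data, the ranking of these scores (together with training time) is usually enough to pick a starting algorithm before any hyperparameter tuning.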