The Log-Rank test begins by calculating, for each group j = 1 through k, a statistic representing the sum of weighted differences between dij/Yij and di/Yi at each event time ti. For the Log-Rank test, the weights applied to these differences are all equal to 1, so each event time contributes equally to the statistic. The statistics calculated for the k groups are linearly dependent, and therefore only (k-1) of them may be used to calculate a test statistic. To do this, (k-1) of the statistics are formed into a vector Z, and the variances and covariances of these (k-1) statistics are placed into a variance-covariance matrix Σ. A test statistic is then calculated as:

χ² = Z Σ⁻¹ Zᵀ

This has a chi-squared distribution with (k-1) degrees of freedom when the null hypothesis is true.
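For completeness, the group-level statistic that enters Z has the standard form (a sketch using the notation above):

Zj = Σi wi [ dij − Yij (di / Yi) ],   with wi = 1 for the Log-Rank test,

where dij and Yij are the number of events and the number at risk in group j at event time ti, and di and Yi are the corresponding totals across all groups.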

The Log-Rank test accommodates two types of incomplete survival data: left-truncated data and right-censored data. If no censored observations are present in the data, the Wilcoxon rank-sum test is more appropriate.

II. Application of the Log-Rank Test

The LIFETEST procedure in SAS can be used to generate the Log-Rank test for comparison of survival patterns across different groups.

Continuing with the credit card example as before, once a few categorical variables have been shortlisted, either through business intuition or through statistical techniques, as potential candidates for segmenting the population, a Log-Rank test is applied to each of them to test whether the survivor/hazard functions differ across the categories of that variable. The variable with the highest Chi-Square is used to create the first split of the population.

Before proceeding with the segmentation methodology, it is advisable to summarize the data across the shortlisted variables for ease of computation. Assuming that there are five shortlisted variables, MOB, DELQ, UTIL, BAL and FULL_PAY_IND, this can be achieved in SAS through a simple SQL procedure. Continuous variables like utilization and balance first need to be converted to categorical variables, UTIL_FMT and BAL_FMT; a sketch of this binning step is shown below, followed by the summarization code.
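The exact cut points used to bin UTIL and BAL are not given here, so the following is only a minimal sketch of how the binning could be done with PROC FORMAT; the cut points are purely illustrative assumptions.

*ILLUSTRATIVE BINNING OF CONTINUOUS VARIABLES (CUT POINTS ARE ASSUMPTIONS);

PROC FORMAT;
   VALUE UTILF LOW -< 0.30 = 'LOW'
               0.30 -< 0.70 = 'MEDIUM'
               0.70 - HIGH = 'HIGH';
   VALUE BALF  LOW -< 1000 = 'LOW'
               1000 -< 5000 = 'MEDIUM'
               5000 - HIGH = 'HIGH';
RUN;

DATA STACKED_DATA;
   SET STACKED_DATA;
   UTIL_FMT = PUT(UTIL, UTILF.);   * character bin for utilization;
   BAL_FMT  = PUT(BAL, BALF.);     * character bin for balance;
RUN;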

*SUMMARIZING THE DATA;

PROC SQL;
   CREATE TABLE SUMMARY1 AS
   SELECT EVENT_DURATION, UTIL_FMT, DELQ, MOB, BAL_FMT, FULL_PAY_IND, T_PD,
          COUNT(*) AS NUMBER,
          SUM(T_PD = 1) AS DEFAULTS
   FROM STACKED_DATA
   GROUP BY 1, 2, 3, 4, 5, 6, 7;
QUIT;
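As an optional sanity check (an assumption, not part of the original program), the totals in SUMMARY1 can be compared against the raw data before moving on:

*OPTIONAL SANITY CHECK: SUMMARY TOTALS SHOULD MATCH THE RAW DATA;

PROC SQL;
   SELECT SUM(NUMBER) AS TOTAL_OBS,       /* should equal the row count of STACKED_DATA */
          SUM(DEFAULTS) AS TOTAL_DEFAULTS /* should equal the number of default records */
   FROM SUMMARY1;
QUIT;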

Next, the Log-Rank test is computed iteratively for each of the five shortlisted variables by specifying it in the STRATA statement of PROC LIFETEST. A separate survivor function is then estimated for each stratum, and tests of the homogeneity of strata are performed. The SAS code is as follows, with VAR_NAME standing for the variable being tested:

*LIFETEST FOR EACH VARIABLE;

ODS OUTPUT SURVDIFF = SD HOMTESTS = HT;
PROC LIFETEST DATA = SUMMARY1 METHOD = LT
     INTERVALS = 0 TO 108 BY 2;
   TIME EVENT_DURATION*T_PD(0);
   STRATA VAR_NAME / ADJUST = TUKEY;
   FREQ NUMBER;
RUN;
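Since VAR_NAME above is a placeholder for the variable under test, the iteration can be automated with a small macro loop. The following is a minimal sketch under that assumption; the macro, its parameter, and the per-variable output data sets HT_&VAR_NAME are not part of the original program, and only the homogeneity tests are captured here.

*ASSUMED MACRO SKETCH TO LOOP THE LOG-RANK TEST OVER THE SHORTLISTED VARIABLES;

%MACRO RUN_LOGRANK(VARLIST);
   %LOCAL I VAR_NAME;
   %DO I = 1 %TO %SYSFUNC(COUNTW(&VARLIST));
      %LET VAR_NAME = %SCAN(&VARLIST, &I);
      ODS OUTPUT HOMTESTS = HT_&VAR_NAME;  * one homogeneity-test data set per variable;
      PROC LIFETEST DATA = SUMMARY1 METHOD = LT
           INTERVALS = 0 TO 108 BY 2;
         TIME EVENT_DURATION*T_PD(0);
         STRATA &VAR_NAME;
         FREQ NUMBER;
      RUN;
   %END;
%MEND RUN_LOGRANK;

%RUN_LOGRANK(UTIL_FMT DELQ MOB BAL_FMT FULL_PAY_IND);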

A few options and statements of the LIFETEST procedure deserve attention before it is executed:

  • In the TIME statement, the survival-time variable, EVENT_DURATION, is crossed with the censoring variable, T_PD, with the value 0 indicating censoring. Hence the values of EVENT_DURATION are considered censored if the corresponding values of T_PD are 0; otherwise, they are considered event times.

  • In the STRATA statement, the variable name is specified, which indicates that the data are to be divided into strata based on the values of that particular variable. In this example, a separate PROC LIFETEST is run for each of the five shortlisted variables: UTIL_FMT, DELQ, MOB, BAL_FMT and FULL_PAY_IND.
  • The METHOD option specifies the method used to compute the survival function estimates. LT refers to life-table (actuarial) estimates. This method is preferred when the number of observations is large².
  • The INTERVALS option specifies interval endpoints for life-table estimates. Each interval contains its lower endpoint but does not contain its upper endpoint. Hence the specification in the above code produces the set of intervals

{[0,2), [2,4), ..., [106,108), [108, ∞)}

  • The FREQ statement is useful for producing life tables when the data are already in the form of a summary data set. The FREQ statement identifies a variable (NUMBER in this case) that contains the frequency of occurrence of each observation. PROC LIFETEST treats each observation as if it appeared n times, where n is the value of the FREQ variable for the observation.

Once the LIFETEST procedure has been run for each of the shortlisted variables and the homogeneity-test results have been stored in the data set named HT, the results are appended together to create a final table containing the Chi-Square test results for each variable.

DATA HT1;
   SET HT;
   LENGTH VAR $30;
   VAR = "VAR_SEG.";
   WHERE TEST = "Log-Rank";
RUN;

The Chi-Square test results for each variable are then appended to create a table like the one below, sorted in descending order of Chi-Square value. The top variable, DELQ, is used for the first segmentation split. A sketch of this consolidation step is shown below.
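Assuming the per-variable output data sets HT_UTIL_FMT, HT_DELQ, HT_MOB, HT_BAL_FMT and HT_FULL_PAY_IND produced by the macro sketch above (names that are assumptions rather than part of the original program), one minimal way to stack and rank the results is:

*ASSUMED CONSOLIDATION STEP: STACK THE PER-VARIABLE RESULTS AND RANK BY CHI-SQUARE;

DATA HT_ALL;
   LENGTH VAR $30;
   SET HT_UTIL_FMT (IN=A) HT_DELQ (IN=B) HT_MOB (IN=C)
       HT_BAL_FMT (IN=D) HT_FULL_PAY_IND (IN=E);
   WHERE TEST = "Log-Rank";             * keep only the Log-Rank row of each HOMTESTS table;
   IF A THEN VAR = "UTIL_FMT";
   ELSE IF B THEN VAR = "DELQ";
   ELSE IF C THEN VAR = "MOB";
   ELSE IF D THEN VAR = "BAL_FMT";
   ELSE IF E THEN VAR = "FULL_PAY_IND";
RUN;

PROC SORT DATA = HT_ALL;
   BY DESCENDING CHISQ;                 * CHISQ holds the chi-square statistic in the HOMTESTS output;
RUN;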

Now, DELQ has 4 categories: cycle 0, 1, 2 and 3.

Since cycle 0 comprises around 95% of the non-default population, the data is divided into two categories: InOrder (cycle 0) and Delinquent (cycle 1+). Since the Delinquent population is relatively small, it is not split further. The InOrder population is then considered, and the segmentation exercise is carried out on this subset using the remaining variables.

The InOrder population is further split into Full Payer and Revolver populations according to the top splitter in this subset, FULL_PAY_IND. Each of these subsets can be split further using the remaining variables, following the same steps as before. It should be kept in mind that every final node after segmentation should have sufficient volume for the model to be robust. In this example, the final segmentation structure is obtained by further splitting each of the Full Payer and Revolver populations by MOB; an illustrative coding of the resulting segments is shown below.
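As an illustration only, the five final segments could be coded along the following lines. The MOB cut-off of 12 months, the numeric codings of DELQ and FULL_PAY_IND, and the segment labels are assumptions, since the actual cut-offs are not stated above.

*ILLUSTRATIVE CODING OF THE FIVE FINAL SEGMENTS (CUT-OFFS AND CODINGS ARE ASSUMPTIONS);

DATA SEGMENTED;
   SET STACKED_DATA;
   LENGTH PD_SEG_IND_SM_2 $20;
   IF DELQ >= 1 THEN PD_SEG_IND_SM_2 = "DELINQUENT";                                  * cycle 1+;
   ELSE IF FULL_PAY_IND = 1 AND MOB <  12 THEN PD_SEG_IND_SM_2 = "FULLPAY_LOW_MOB";   * InOrder full payers, low MOB;
   ELSE IF FULL_PAY_IND = 1 AND MOB >= 12 THEN PD_SEG_IND_SM_2 = "FULLPAY_HIGH_MOB";
   ELSE IF FULL_PAY_IND = 0 AND MOB <  12 THEN PD_SEG_IND_SM_2 = "REVOLVER_LOW_MOB";  * InOrder revolvers, low MOB;
   ELSE PD_SEG_IND_SM_2 = "REVOLVER_HIGH_MOB";
RUN;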

D. Analysis of Segmentation Performance

The blue highlighted boxes in Figure 1 are the final segments for this population. Once the final segmentation structure has been decided, it makes sense to check the survival distributions across the five segments. The following program can be used for that, assuming that the variable "pd_seg_ind_sm_2" captures the new segmentation structure (the input data set name SEGMENTED used below is illustrative, carried over from the sketch above):

ods output SurvDiff = SD HomTests = HT;
proc lifetest data = SEGMENTED method = lt   /* SEGMENTED is an assumed name for the data set carrying pd_seg_ind_sm_2 */
     intervals = 0 to 108 by 2 plots = (s,h);
   time event_duration * t_pd(0);
   strata pd_seg_ind_sm_2 / adjust = tukey;
run;

The ADJUST option (new in SAS 9.2) tells PROC LIFETEST to produce p-values for all ten pairwise comparisons of the five strata and then to report p-values that have been adjusted for multiple comparisons using Tukey's method. Results are shown in Table 4.

Table 4 shows the overall chi-square tests of the null hypothesis that the survivor functions are identical across the five segments. All three tests are highly significant, unanimously rejecting the null hypothesis and providing evidence that at least one of the five stratum hazard functions differs significantly from the others for some value of t ≤ τ.

The second output, in Table 5, shows the Log-Rank tests comparing each possible pair of strata. All the tests are significant, both on the raw p-values and after the Tukey adjustment, suggesting that each segment is significantly different from the others. This rules out the possibility of collapsing any of the segments.

The graph in Figure 2 shows evidence of differences in the survival functions across the five strata, thereby supporting the results already obtained from the Chi-Square tests. Finally, the default rates of the five segments at two time horizons, 12 months and 60 months, are considerably different, confirming that the segments also differ in terms of default rates; a rough way to tabulate these rates is sketched below.
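As a crude empirical check (it ignores censoring, unlike the life-table estimates, and the data set name SEGMENTED is the assumed one from the earlier sketch), the observed default rates at 12 and 60 months can be tabulated as follows:

*ROUGH DEFAULT-RATE CHECK BY SEGMENT AT 12 AND 60 MONTHS (IGNORES CENSORING);

PROC SQL;
   SELECT PD_SEG_IND_SM_2,
          SUM(T_PD = 1 AND EVENT_DURATION <= 12) / COUNT(*) AS DEF_RATE_12M FORMAT = PERCENT8.2,
          SUM(T_PD = 1 AND EVENT_DURATION <= 60) / COUNT(*) AS DEF_RATE_60M FORMAT = PERCENT8.2
   FROM SEGMENTED
   GROUP BY PD_SEG_IND_SM_2;
QUIT;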

Conclusion and Limitations

This approach also has its own set of limitations:

First, the test statistic for the Log-Rank test is based on a large-sample approximation and gives good results only when the sample size is large. The number of comparison segments should not be allowed to grow too large, to avoid segments with too few subjects; each group should contain at least 30 subjects, preferably more for the best results.

Second, the Log-Rank test is most powerful for detecting differences of the form S1(t) = [S2(t)]^γ, where γ is some positive number other than 1.0. This equation defines a proportional hazards model, and the Log-Rank test is not particularly good at detecting differences when survival curves cross.
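To see why this corresponds to proportional hazards, recall that S(t) = exp(−H(t)), where H(t) is the cumulative hazard. The relation above then gives

S1(t) = [S2(t)]^γ  ⇒  H1(t) = γ H2(t)  ⇒  h1(t) = γ h2(t),

so the hazard in one group is a constant multiple γ of the hazard in the other at every time point.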

Segmentation is a unique aspect of modelling in that it blends art and science in almost equal measure. A segmentation structure based entirely on statistical measures does not always add enough value; it is effective only when the numbers are coupled with business requirements and common sense, as was demonstrated in the example discussed above. Dividing the population into five different groups and building a separate survival model on each of them yielded better results than building a single standalone model on the entire population, because these groups are inherently different in terms of their survival patterns.

Survival analysis can be applied to build models for the time to default on credit cards. This knowledge helps the issuer preempt attrition and devise customer engagement strategies. Here we proposed creating an intuitive segmentation structure on a large data set of credit card accounts, before the onset of the actual modelling exercise, by using the Log-Rank test to compare the hazard functions across the different sub-groups. The program used in this paper serves as a fast, efficient way to churn through a large quantity of data and provide the client with the information needed for a final decision on modelling splits.

References

  1. Allison, P. D. (2010). Survival Analysis Using SAS®: A Practical Guide, Second Edition. Cary, NC: SAS Institute Inc.
  2. Bellotti, T., & Crook, J. (2007, May 7). Credit Scoring with Macroeconomic Variables Using Survival Analysis.
  3. Man, R. (2014, May 9). Survival Analysis in Credit Scoring: A Framework for PD Estimation.
  4. Pazdera, J., Rychnovsky, M., & Zahradnik, P. (2009, Feb 1). Survival Analysis in Credit Scoring.
  5. Sayles, H., & Soulakova, J. (n.d.). Log-Rank Test for More than Two Groups.
  6. Weldon, G., & Zidun, H. (n.d.). Segmentation of Data Prior to Modeling. Atlanta: Merkle, Inc.

End Notes

  1. Refer to (Bellotti & Crook, 2007), (Pazdera, Rychnovsky, & Zahradnik, 2009), (Man, 2014).
  2. The Kaplan-Meier method of estimating survivor functions is more suitable when the sample size is small and event times are measured with precision. It is in fact the default method in PROC LIFETEST.
