Friday, August 15, 2014

When to Break Golden Handcuffs: The Endowment Equation

Recently, a few of my friends have been going through something of an existential crisis. It's only natural, as they're all just entering the unstructured real world-- I myself went through a mini-nihilism phase after graduation. One cure, I've found, is to set a lifelong goal, any goal, and work tirelessly to achieve it-- anything from becoming an Olympic athlete to helping as many people as possible. A nihilist would say all goals are equally arbitrary-- but having one in spite of that fact is a sure-fire way to get your sense of purpose back, in my book.

Naturally, the first step many people, including a lot of my friends, choose to take to achieve their goal is to acquire enough capital such that they don't have to worry about their finances anymore. Note that seeking early retirement isn't a goal everyone chooses to pursue and, indeed, it's not something I care about (I'm probably never retiring). But given that the initial goal of accruing "retirement money" is so common, I thought I'd write a little blurb to help make the abstract goal of retiring a little more concrete. The goal of this article is to help you compute exactly how much money you need to have in the bank before you can dedicate your life to a potentially revenue-negative venture (like sailing around the world or something like that).

I started wondering how much money one would need to have as an "endowment" before one can live off the interest indefinitely without having to worry about working anymore. There's the "4% rule," which states that one can spend 4% of an endowment every year without ever running out of money, but I wanted a more exact answer.

I frame the problem as follows:
  • Start with an endowment of $Y
  • You pick an amount $X that you want to "make" every year as your "income" off the endowment.
  • How big does Y have to be so that you can sustain making $X adjusted for inflation every year, forever, while having your original endowment, Y, also keep up with inflation?
The equation looks as follows:

Y (1 + r_{inflation})^n (r_{s&p} - r_{inflation}) = X (1 + r_{inflation})^{n+1}

where we have:
  • n = the number of years after "retirement" (i.e. the number of years you've been living off your endowment)
  • r_{inflation} = percent inflation every year (3% if you want to be conservative)
  • r_{s&p} = percent return on the S&P 500 (6% if you want to be conservative)
  • (r_{s&p} - r_{inflation}) = your excess return over inflation
  • Y = your initial endowment
  • X = the amount of money you want to make as income off your endowment (e.g. 200k)
What this equation says is that after living off your endowment for n years, your excess return over inflation (left-hand side) should equal your desired inflation-adjusted income (right-hand side).

Solving this equation (dividing both sides by (1 + r_{inflation})^n leaves Y (r_{s&p} - r_{inflation}) = X (1 + r_{inflation})), and adding X to account for your income in the first year, yields the following simple relation:

Y = X (1 + (1 + r_{inflation}) / (r_{s&p} - r_{inflation}))

What does this equation tell you? Well, effectively, it tells you how much money you need to have in the bank before you can comfortably live off the interest, taking inflation into account. Once you get to this point, you can have some fun fiddling around with the numbers.

Assuming inflation is 3% and the S&P 500 returns 6% annually (both conservative numbers), here's how big your endowment needs to be to earn various levels of income without doing jack sh**:
  • 100k/yr => endowment of ~3.53 MM
  • 200k/yr => endowment of ~7.07 MM
  • 500k/yr => endowment of ~17.67 MM
  • 1M/yr   => endowment of ~35.33 MM
It's linear in the desired yearly income-- but it's still nice to see the numbers written down. At the end of the day, having 7.1MM in the bank makes you pretty much set for life.

Here's a plot of the required endowment size against the desired annual income, along with the Mathematica code, in case you want to play with it:
[Plot: Required Endowment (in $MM) vs. Desired Annual Income (in thousands of dollars)]

(* Required endowment for income X, given annual inflation and S&P returns *)
f[inflation_, sandpreturn_, X_] := X * (1 + (1 + inflation)/(sandpreturn - inflation))

(* Endowment (in $MM) needed for incomes from 0 to 1M/yr, at 3% inflation and 6% returns *)
Plot[f[.03, .06, x*1000]/1000000, {x, 0, 1000}, Frame -> True, FrameLabel -> { "Desired Annual Income (in thousands of dollars)", "Required Endowment (in $MM)"}]

(* Endowment needed to make 1M/yr *)
f[.03, .06, 1000000]
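
As a quick sanity check on the table above, you can also evaluate f at each income level (these are the outputs I'd expect at 3% inflation and 6% returns):

(* Required endowment for 100k, 200k, 500k, and 1M per year of income *)
Table[{x, f[.03, .06, x]}, {x, {100000, 200000, 500000, 1000000}}]
(* => {{100000, 3.53333*10^6}, {200000, 7.06667*10^6},
      {500000, 1.76667*10^7}, {1000000, 3.53333*10^7}} *)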




So, in summary, if your goal is to retire as soon as possible, simply figure out the annual income you'd like to have in retirement and plug it into the formula to compute the amount of capital you'll need to have in order to retire securely. As I said, racing to retirement isn't for everyone-- but if it's for you, hopefully this post helps put a number in front of you.

Afterthought: Actually, the endowment from this equation is an overestimate, since when you die you still have the whole endowment intact. If instead you want the endowment to go to zero around when you die, I think you can save significantly less money. Maybe I'll compute this case in the future.


Sunday, November 10, 2013

Flawed Logic Everyone Uses to Support Buying a Home

The Claim

Some people equate renting a home to "throwing away money each month" when compared to buying a home. That is, they think that if they buy a home, they "invest" the rent in the home in the form of a mortgage payment each month vs. "throwing it away" on rent paid to someone else if they don't own the home themselves. They make the claim: "buying a home is better than renting a home because renters throw away money each month whereas buyers invest the money in the home itself, and therefore don't lose anything other than the interest on their mortgage payments." This is horribly flawed logic.

It recently occurred to me that a lot of people might think this, so I wanted to quickly write a blurb about exactly why this logic doesn't make any sense. To be clear: the reason the claim is wrong isn't subjective or qualitative; the claim is straight-up wrong because of basic, simple, though unintuitive, economic principles. It's wrong because of what you learn in econ 101, so if you ever thought the choice between buying/renting was obvious, you should really read and understand what I've written below.

Opportunity Cost (Definition and Explanation)

In order to understand why the claim is wrong, we first need to understand the concept of opportunity cost. Explanations of opportunity cost are all over the internet but I'll explain what it is here for completeness.

Simply put, opportunity cost is the cost of not doing something. To understand how it works, we'll go through a short example.

Suppose I charge you $100 to leave work an hour early on Friday. Then the cost of leaving work is obviously $100: if you leave early, you lose a hundred dollars. Call this situation (1). What's tough for people to internalize is that not taking money is the same as giving money away. To see that these two things really are the same, we can modify the example slightly. Now, instead of charging you $100 straight-up to leave work early, I take $100 from you before-hand (against your will, so it's not part of your decision) and offer to give it back to you if you stay that extra hour. Call this situation (2).

These two situations are economically identical, and the key question you have to ask to see this is "what's the cost of leaving work?" in each case. The cost of leaving work is the difference between how much money you have if you leave and how much money you have if you stay. In situation (1), you have -$100 if you leave and $0 if you stay, so the net effect of leaving is (-$100) - ($0) = -$100. In situation (2), you again have -$100 if you leave (because I took it before-hand) and $0 if you stay (because you get it back). The payoffs are identical, so the cost of leaving is again $100, even though I charge you based on your decision in situation (1) and merely make you an offer in situation (2). By definition, we call the $100 a "cost" in situation (1), because you straight-up "lose" $100, and an "opportunity cost" in situation (2), because you "choose not to take" the $100 offered to you.
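
If it helps, here's the same bookkeeping written out as a tiny Mathematica computation (the numbers are just the ones from the story above):

(* {money you end up with if you leave, money you end up with if you stay} *)
situation1 = {-100, 0};               (* charged $100 when you leave *)
situation2 = {-100 + 0, -100 + 100};  (* $100 taken up-front, returned if you stay *)

costOfLeaving[{leave_, stay_}] := leave - stay
{costOfLeaving[situation1], costOfLeaving[situation2]}   (* => {-100, -100}: identical *)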

Opportunity cost will be important for the rest of the explanation so be sure to check out Wikipedia if my example still doesn't really make sense.

Why the Claim is Wrong

Alright, so now that we understand opportunity cost, we can understand why renters aren't really "throwing away money" compared to buyers. In order to see this, we have to factor in not only the mortgage payments buyers make but the opportunity cost they incur by not renting out their home. The best way to do this is with an example.

Let's suppose you've saved up $1MM (lucky you!) and you're trying to figure out what you want to do about your living situation. You're going to choose between the following two options that almost every American faces and analyzes (usually incorrectly):
  1. Rent a home worth $1MM and save the money you're not spending on rent (or invest it, it doesn't really matter). Suppose this rent costs you $X a month.
  2. Buy a $1MM home and live in it.
What's the difference between these two situations? To see this we will need to leverage our understanding of opportunity cost. In particular, we'll compute your monthly costs when you rent and compare them to your monthly costs when you buy.

The Renting Case:
In the case where you rent, you have an up-front cost of $X per month (the cost of the rent) and that's it (easy-peasy). This is the money most would consider "thrown away." It's identical to situation (1) from the opportunity cost section, where the owner of the home is your boss and renting is "leaving the office." You get charged money to stay in your home (a cost), just like you get charged money to leave the office.

The Buying Case:
Oh, in this case you don't have any costs per month because you're awesome and you own the house, right?
No money being thrown away, right? WRONG! Go back and read the opportunity cost section, bud! While it's true you have no "rent" cost, you're forgetting about the opportunity cost of living in the home! If you could rent your home out for $X a month, then you're giving up (throwing away) $X per month by living in the home instead of renting it out to someone else. And, because money given up is money lost (see the opportunity cost section), you're effectively losing $X per month with option (2) as well, making you no better off than with option (1). This is hard to wrap your head around, but the key is to understand that not taking a renter's money is the same as paying the rent yourself. And just like you throw away money by renting from another person, you also throw away money when you refuse to let other people pay you to live in the home you own. It's identical to situation (2) in the opportunity cost section: you get money taken away up-front (by the seller of the home / your boss in the example), then you get offered it back if you choose to leave the home (by renters / your boss in the example). Now you give up money to stay in your home (an opportunity cost), just like you give up money to leave the office.

In conclusion: when you rent you pay $X a month up-front and when you buy, you still pay $X a month in opportunity cost if you live in your home. Thus, renters and owners throw away equivalent amounts of money and there is no obvious advantage to buying a home vs. renting forever.
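
To spell that out in code (with a made-up rent of $3,000/month, and deliberately ignoring maintenance, property taxes, transaction costs, and appreciation):

(* Monthly position in each case: rent paid out, or rent forgone *)
renterMonthly[x_] := -x       (* pays $x in rent to a landlord *)
ownerMonthly[x_] := 0 - x     (* pays no rent, but forgoes $x of rental income *)
{renterMonthly[3000], ownerMonthly[3000]}   (* => {-3000, -3000} *)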

A Home is Just an Investment in Real-Estate

So what do you gain by buying a home versus renting forever? The only thing you get is an investment in the real-estate market and nothing more. If you think real-estate is going to boom, buy a house, otherwise, invest in something else and rent. To see why this is the case, we again use an example.

Let's suppose, again, that you've saved up $1MM and you're ready to decide between two different options:
  1. Use all the money to buy a house and rent it out to someone for $X a month.
  2. Put all the money in the stock market.
In both of these situations suppose you're renting (living in) a different $1MM house and you're paying $X a month. This makes living costs the same in both cases so we don't have to worry about them. So what's the difference between the first and second option? Well, in the first case you're investing in the real-estate market and in the second you're investing in the stock market, but that's about it. In the first case, you'll get rent payments ($X a month, maybe you use them to pay for your rent at the other house), in the second case you'll get dividend payments. In the first case you'll sell your house if you feel like it and in the second you'll sell all your stocks. If the real-estate market looks like a better investment to you than the stock market you'll take option (1), otherwise you'll take option (2) but other than that, one isn't much different than the other. A home is an investment vehicle just like a share of stock when you rent it out.

So what does this have to do with anything? Well, suppose you want to stop paying your $X/month rent and instead take option (1) and live in the house you invested in. Nothing changes. You're still invested in the real-estate market like before, only instead of collecting $X/month from renting out the home you bought (and paying $X/month for your other place), you now forgo $X/month in rent on the home you bought (and stop paying $X/month at the other place). Net, you're in exactly the same position living in the home you own as you were renting it out and renting elsewhere.

Some Actual Reasons to Buy a Home

So by now you should be convinced that home renters don't throw away more money than home owners and, additionally, that all owning a home affords you (over someone who rents exclusively) is an investment in the real-estate market. But, all that said, there are still some very interesting reasons why buying a home could be better than renting one, and I'm going to go over them here.

A Mortgage Gives You "Leverage"
Suppose you have $1MM. If you want to invest in the stock market, typically all you can do is use that $1MM to buy a bunch of shares of stock. If the shares go up by 1%, you make a 1% return on your $1MM and that's it. However, when you buy a home, you can typically buy one that's worth much more than the amount of capital you happen to have on-hand. Going with this example, you can use your $1MM as a down payment, borrow another $1MM from the bank, and buy a $2MM home. Banks like giving loans on homes, but they're generally not receptive to people who come in asking for money so they can "invest in the stock market." As such, you can use bank loans to your advantage when investing in real-estate in a way you can't when investing in normal stocks. Why is this good? Because now, if the value of your home goes up 1%, you've actually made a 2% return on your $1MM investment. That is, you're using "leverage" to amplify your returns. Of course, this is true for the downside, too, but if you're damn sure the real-estate market is going to boom, getting a little leverage and buying a home can get you rich pretty quick.
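
To see the leverage arithmetic in code (this ignores mortgage interest and transaction costs, and equityReturn is just a name I made up):

(* Return on your own cash when the home's price moves by a fraction r *)
equityReturn[r_, homeValue_, equity_] := r*homeValue/equity
equityReturn[.01, 2000000, 1000000]   (* => 0.02: a 1% move becomes a 2% return at 2x *)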

You Can Deduct Interest Payments from Your Taxes
This is one of the biggest advantages to taking on a mortgage in my opinion, mainly because it makes your effective interest rate lower. To understand how it works, let's look at an example.

Suppose you pay $12,000 in interest payments one year. If you make $100,000 that year in taxable income, then you can deduct the $12,000 from your taxable income, which means you'll only pay taxes on the remaining $88,000 you earned. If your tax rate is 25%, then without the deduction you pay $100,000 * .25 = $25,000 in taxes. With the deduction, you only pay ($100,000 - $12,000) * .25 = $22,000. That's a savings of $3,000. In general, the amount you save is (amount you pay in interest) * (marginal tax rate).

Overall, this deduction effectively lowers your interest rate. If your interest rate is R and your marginal tax rate is T, then your "effective" interest rate is actually R * (1 - T), which is lower than R alone. Pretty nice.
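
Here's the arithmetic from both paragraphs in code form (the 5% rate at the end is a made-up example, not a quote):

(* Tax savings from deducting mortgage interest *)
interestPaid = 12000; marginalRate = .25;
interestPaid*marginalRate   (* => 3000., the savings computed above *)

(* Effective interest rate after the deduction: R * (1 - T) *)
effectiveRate[r_, t_] := r*(1 - t)
effectiveRate[.05, .25]     (* => 0.0375: a 5% rate effectively becomes 3.75% *)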

Your Downside Ain't Bad
Suppose you put your $1MM in stocks and they all went to $0. Now you have $0-- that sucks! But now suppose, instead, you took out a huge mortgage and bought an $8MM home. If things work out well, your upside is magnified 8x, way better than investing in stocks. But what happens if things go bad? Well, if the real-estate market tanks, you can just stop paying your mortgage and let the bank foreclose on your home, putting you at $0 just like you would have been had your stocks gone to zero (lots of people did this when the housing bubble burst in '08). So, let's compare the two examples again. With stocks, you make a 1% return on investment for every 1% the stocks go up and -1% for every 1% they go down. With the home, however, you make 8% for every 1% the real-estate market goes up, while on the way down your loss is capped at the $1MM you put in, since you can always walk away from the loan. Overall, this asymmetric payoff can make buying a home look pretty attractive-- not that I'm encouraging anyone to default on their loans, which is a very serious thing...
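
And a stylized payoff function for the $8MM-home example (this ignores interest payments, recourse rules, and the damage a default does to your credit, so it's the rosiest possible version of the story):

(* Payoff in $MM on $1MM of equity in an $8MM home, for a fractional price move r;
   walking away caps the loss at your $1MM down payment *)
homePayoff[r_] := Max[8*r, -1]
{homePayoff[.1], homePayoff[-.1], homePayoff[-.5]}   (* => {0.8, -0.8, -1} *)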

Real-Estate Could be a Good Investment
I won't go into this, but if you think real-estate is a better investment than anything else, buying a home is a great way to invest.

One Last Kind of Weird Thing

One side-effect of living your life by the principle of opportunity cost (the correct way to live your life, by the way) is that if you own a home and its value sky-rockets, you should move out. Why? Because your "rent" (the opportunity cost of living in your home) just went up, in a sense. Every month you live in your home, you're forfeiting a ridiculous amount of rent from people who would be willing to pay much more to live in the place than you would. As such, you need to always be asking yourself "what would my rent be if I didn't own this place?" and be willing to move to a cheaper area if that number is too high, or risk paying a very high opportunity cost.

Tuesday, June 4, 2013

Problems with GPA and a Possible Solution

GPA should provide a total ordering of all the students being considered based holistically on their performance throughout their academic careers. That's how employers use it when deciding who to interview and that's how colleges use it to assign honors at graduation. In spite of this, however, there are some major problems with GPA that make it an extremely inadequate measure of relative student performance:
  1. Some courses are graded more leniently than others. This distorts student incentives, causing students to pad their schedules with easier courses despite the fact that taking more difficult courses provides more net benefit in the long run.
  2. Some departments grade differently. Arbitrary curve differences across concentrations often exist and cause students to unnecessarily shy away from one concentration over another in an attempt to achieve a higher GPA upon graduation and, thus, in their minds, better employment opportunities. This is undesirable because curve differences often do not reflect actual differences in difficulty, and thus shouldn't factor into a student's decision on what to major in. Comparative Literature, for example, is arguably just as difficult as Theoretical Physics, and yet lighter curves in the former can prevent students from majoring in the latter for no good reason.
  3. GPA discourages students from taking courses with intelligent students. Taking courses with smart students generally lowers one's chances of getting a good grade, since courses are usually graded on a curve. As such, a GPA system can cause students to shy away from upper-level courses because they are inadequately compensated for the increase in difficulty. This is undesirable because students can have much more intellectual conversations when they have more peers at their level.

Various hacks can be imposed upon the traditional GPA system to try and solve these problems. Princeton, for example, uses grade deflation (capping A's at 35% of grades across all courses) to try and prevent problems (1) and (2) from occurring. But, obviously, by doing so, they exacerbate problem (3), while making the environment feel somewhat draconian, with students competing for A's that get doled out based largely on noise in the easier courses.

Having thought about the problem of fixing GPA at some length, I reached the following conclusion: GPA does not factor in enough global information to make it a useful measure of relative student performance. That is, in order for a ranking system to be truly robust to all the problems listed above and more, it must be designed to heavily factor in deep inter-student relationships from the ground up.

Having reached this conclusion, I thought of the best global ranking algorithm I know, PageRank, which ranks websites, and considered how it could apply to the problem of ranking students. After all, all PageRank does is compute the limiting distribution of a random walk on a strongly connected graph, so why not convert the problem of ranking students into a graph problem? Below, I describe a scheme that I think would work very well as a ranking system for students. It requires the transcripts of all the students at a particular institution, a tough batch of information to gather, but we'll worry about practicality at the end.

How the algorithm works (a toy code sketch follows the steps below):

  • Construct a graph where each student corresponds to a vertex in the graph.
  • Go through every student's transcript and do the following:
    • If two students, A and B, took a course together and A scored higher than B, add a directed edge pointing from B to A. This effectively represents an endorsement of student A by student B that occurs as a result of A performing better than B.
    • If two students got the same grade, add two directed edges, one pointing from B to A and one pointing from A to B.
  • Use the graph to construct a (student x student) transition matrix, M, as follows:
    • For each pair of students, (i, j)
      • Let Mij = a/n + (1-a) (c/k), where a is some constant less than one (say .1), n is the total number of students, c is the number of edges leaving i and going into j (corresponding to courses in which student j outperformed or tied student i), and k is the total number of edges leaving student i's node in the graph.
  • At this point, the matrix represents a fully-connected graph, so compute the limiting distribution, which is just the left eigenvector associated with eigenvalue one. The result is a vector of size equal to the number of students.
  • Each entry in the limiting distribution vector corresponds to a student's PageRank score. Since a higher score broadly implies that the student was endorsed more often by students with high PageRanks themselves, the entry for student i in the resulting vector can be used as a proxy for student i's rank.
  • Order the students by their PageRank scores and return this ordering.
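
To make the construction concrete, here's a toy Mathematica sketch. The transcripts are made up (three students, three courses; a grade of 0 means the student didn't take the course), and all the variable names are mine:

(* grades[[s, c]] = student s's grade in course c (higher is better; 0 = not taken) *)
grades = {{4, 4, 0}, {3, 4, 2}, {2, 0, 3}};
n = Length[grades]; a = .1;

(* e[[i, j]] = number of edges from i to j, i.e. courses where j outperformed or tied i *)
e = Table[If[i == j, 0,
     Sum[Boole[grades[[i, c]] > 0 && grades[[j, c]] > 0 &&
         grades[[j, c]] >= grades[[i, c]]], {c, Length[First[grades]]}]],
    {i, n}, {j, n}];

(* Transition matrix: Mij = a/n + (1 - a) (c/k); fall back to uniform if i has no
   outgoing edges (a student who was never outperformed or tied) *)
M = Table[If[Total[e[[i]]] == 0, 1./n,
     a/n + (1 - a)*e[[i, j]]/Total[e[[i]]]], {i, n}, {j, n}];

(* Limiting distribution: left eigenvector with eigenvalue one, via power iteration *)
pi = FixedPoint[#.M &, ConstantArray[1./n, n], 200];

(* Students ordered best-first by their PageRank scores *)
Reverse[Ordering[pi]]   (* => {2, 1, 3} on this toy data *)

With real transcripts you'd obviously want a sparse-matrix version of this, but the structure is the same.
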
This algorithm above (essentially PageRank) treats course outcomes as relative orderings and assembles all of these relative orderings into a single total ordering. I now claim that using the algorithm above to rank students would eliminate the three hazards mentioned at the beginning. Here's how:
  1. Some courses are graded more leniently than others. This wouldn't matter since we are treating course grades as relative orderings. That is, a course that grades leniently will provide less information to the algorithm (since many students will have the same grade) but will not otherwise unfairly affect the resulting ordering. In fact, no matter how leniently a course grades, as long as it grades fairly (a reasonable assumption), it will be no different for students than a course that grades more rigorously.
  2. Some departments grade differently. Again, this is not a problem as long as departments grade students fairly. That is, as long as a student who gets an A did legitimately better than a student who got a B or some other lower grade, departments that as a whole give more A's than others will not adversely affect the ranking of students. In particular, taking a course in a department that grades easier will not affect the student, since his performance will be globally compared to all of his schoolmates anyway.
  3. GPA discourages students from taking courses with intelligent students. This problem is eliminated because of what the limiting distribution represents. In the constructed graph, it is advantageous for one to have more inward edges in general, but it is more advantageous to have inward edges that come from students who themselves have many inward edges. This can be achieved by taking courses with many intelligent students, and so computing an ordering in this way significantly improves one's incentive to participate in higher-level courses.

More broadly, I think PageRank's ability to factor in global information comes in very handy when ranking students, a problem that I think is fundamentally global if approached correctly. If you want to learn more about this type of global ranking, check out the "Feedback Arc Set" problem, which PageRank can be seen as attacking heuristically (though not exactly).

Now, however, we run into problems of practicality. Unfortunately, I know of something called FERPA, which prevents crazy nerds like me from just mining all transcripts. But maybe it could be used by companies like Google that get tons of transcripts from students anyway. If Bitcoin has taught me anything, it's that providing the right incentives to people can instantly fix many practical problems (think about how quickly mining power grew with bitcoin compared to fold@home), so maybe actually paying students to provide transcripts, even if they're not applying, could be viable for a large enough firm. Perhaps this could be a particularly tantalizing opportunity for LinkedIn, since it already has all of the infrastructure in place to just slurp up people's transcripts. Like so many ideas in computer science, this one could die because of a lack of data and business model, but if one could figure out a way to get the transcripts and make money off of generating a ranking, it could be pretty cool to see what happens.

In general, I genuinely think that GPA is a poor ranking system for the 21st century, carried over from a time when computing anything more than an unweighted mean was "hard". Even if PageRank isn't the way to solve the problem of ranking students, I think something should be devised to replace GPA, and I think the approach of treating course results as relative orderings and aggregating those orderings into one large ranking of all the students is a reasonable first stab.