W**K
Superforecasting will give you insight into much more than forecasting.
I’ve been reading about Philip Tetlock’s work on forecasting for years and I was impressed. But somehow Superforecasting: The Art and Science of Prediction kept slipping down my “next read” list. That was my loss. I wish I’d read this book years ago.

Superforecasting will give you insight into much more than forecasting. You’ll learn a lot about how we make decisions and the role that cognitive biases play. You’ll discover how to lead more effectively. You’ll also discover how we’re improving the way we make forecasts and decisions.

Philip Tetlock and Dan Gardner compare the current state of forecasting and decision-making to the state of medicine in the 19th century. Here’s how they phrase it.

“All too often, forecasting in the twenty-first century looks too much like nineteenth-century medicine. There are theories, assertions, and arguments. There are famous figures, as confident as they are well compensated. But there is little experimentation, or anything that could be called science, so we know much less than most people realize. And we pay the price. Although bad forecasting rarely leads as obviously to harm as does bad medicine, it steers us subtly toward bad decisions and all that flows from them—including monetary losses, missed opportunities, unnecessary suffering, even war and death.”

That sounds dreadful. But the authors think you can improve your forecasting and decision-making. You can learn from what superforecasters do. That’s what Superforecasting is about.

Start by paying attention to the process. Increase the number of your information inputs. Learn how to ask pointed questions. Watch out for cognitive biases and what the authors call “bait and switch.” Here’s Philip Tetlock’s description of “bait and switch.”

“Formally, it’s called attribute substitution, but I call it bait and switch: when faced with a hard question, we often surreptitiously replace it with an easy one.”

Personally, that was one of my most powerful takeaways from this book. I’ve become acutely aware of how often I do a bait and switch when I’m analyzing information.

Make precise forecasts. Replace the equivalent of “I think it might rain” with “I think there’s a 70% chance of rain before 5:00 PM.”

Once you’ve done the hard work of developing a preliminary forecast, adjust it as you gather more data and insight. Superforecasters adjust their forecasts frequently and in small increments.

There’s one more thing you need to do: review your forecasting performance. As with learning and mastering any other skill, you need good feedback and reflection.

There’s one more big insight in this book. You’ll make better forecasts if you combine the practices of superforecasters with the practices of people the authors call “super questioners.”

That covers the basics of the book, but it doesn’t give you an idea of how rich the material is. Several things in Superforecasting are worth the price of the book all by themselves.

The leadership chapter is excellent. There’s a lot of good material about both making good leadership decisions and conveying those decisions to others.

The book gives you an excellent discussion of Daniel Kahneman’s System 1 and System 2. As you read the book, you’re also reading an excellent review of cognitive biases.

I loved the many historical examples. I learned a lot from the analyses of the Bay of Pigs and the Cuban Missile Crisis, even though I’ve read a lot about both. The authors also tell the story of the CIA’s analysis of weapons of mass destruction in Iraq.

In a Nutshell

Superforecasting: The Art and Science of Prediction will give you insight into much more than forecasting. If you apply what you learn from this book, you will make better forecasts and better decisions. You’ll also be able to improve your leadership and help create more effective teams.
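The "review your forecasting performance" advice has a standard metric behind it: forecasting tournaments score probability forecasts with a Brier score (mean squared error between the stated probability and the 0/1 outcome). As a minimal sketch of how that feedback loop works, with sample numbers of my own rather than figures from the book:

```python
def brier_score(forecasts, outcomes):
    """Average squared error of probability forecasts against 0/1 outcomes.

    Lower is better: 0.0 is perfect, and always saying 50% scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A forecaster who said "70% chance of rain" on four days,
# when it actually rained on three of them:
forecasts = [0.7, 0.7, 0.7, 0.7]
outcomes = [1, 1, 1, 0]
print(round(brier_score(forecasts, outcomes), 3))  # 0.19
```

Because 0.19 beats the 0.25 of a coin-flip forecaster, the 70% calls were, on average, informative; tracking this number over many forecasts is exactly the kind of feedback the review describes.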
J**R
Scientific approach to prediction
I really enjoyed this book a few years ago, and I have come back to offer a review based on my notes at the time and on how the insights have settled for me since. I took away many key concepts for successfully forecasting uncertain events, and also noted some areas for further exploration. Many of the following notes are drawn from the authors' insight into the demonstrated practices of repeatedly successful forecasters.

The book repeatedly stresses the importance of measurement for assessing and revising forecasts and programs. Many people simply don't create metrics of any kind when they make unverifiable and chronologically ambiguous declarations.

The book emphasizes the importance of the feedback on predictions that measurement allows, as there is a well-studied gap between confidence and skill in judgment. We have a tendency to be uninterested in accumulating counterfactuals, but we must know when we fail in order to learn from it. If forecasts are either not made at all, or left unquantified and ambiguous, we can't receive clear feedback, so the thought process that led to the forecasts can't be improved. Feedback, however, is vulnerable to the psychological trap of hindsight bias: once we know the outcome, that knowledge skews our perception of what we actually thought at the time of the prediction, before we knew the outcome.

The main qualities for successful forecasting are being open-minded, careful, and self-critical in one's thinking, with focus, which is not effortless. Commitment to self-improvement is the strongest predictor of long-term performance in measured forecasting; it can roughly be considered equivalent to the popular concept of grit. Studies show that individuals with fixed mindsets do not pay attention to new information that could improve their future predictions. Similarly, forecasts tend to improve when probabilistic thinking is embraced rather than fatalistic thinking, i.e. the perspective that certain events are inevitable.

A few interesting findings that the authors expand upon in more detail in the book: experience matters, because it provides the tacit knowledge essential to the practice of forecasting; and grit, or perseverance toward making great forecasts, is roughly three times as important as intelligence.

Practices to undertake when forecasting: break the question into components so that you can distinguish and scrutinize your assumptions; develop backwards thinking by asking what you would need to know in order to answer the question, and then making appropriate numerical estimates for those sub-questions; practice taking the outside view, which means starting from an anchor based on the past experience of others, at first downplaying the problem's uniqueness; explore other potential views of the question; and express all aspects and perspectives as a single number that can be manipulated and updated.

Psychological traps discussed in the book include: confirmation bias, a willingness to seek out information that confirms your hypothesis while not seeking out information that may contradict it, which is the opposite of discovering counterfactuals; belief perseverance, related to cognitive dissonance, in which individuals prove incapable of updating a belief in the face of new evidence, rationalizing instead so as not to have the belief upset; scope insensitivity, which is failing to properly factor an important aspect of scope, such as timeframe, into the forecast; and attribute substitution, which is replacing a hard question with a similar question that is not equivalent but is much easier to answer.

Researched qualities to strive for as a forecaster: cautious, humble, nondeterministic, actively open-minded, reflective, numerate, pragmatic, analytical, probabilistic, a belief updater, an intuitive psychologist, and possessed of a growth mindset.

The authors then delve into another practical perspective on forecasting: teams. Psychological traps for teams include groupthink, the phenomenon in which small cohesive groups tend to unconsciously develop shared illusions and norms, often biased in favor of the group, that interfere with critical thinking about objective reality. There is also a tendency for members of a group to leave the hard work of critical thinking to others on the team instead of sharing it optimally, which, combined with groupthink, leads the group to feel a sense of completion upon reaching agreement. One idea to keep in mind when managing a group is that the group's collective thinking can be described as a product of the communication within the group itself, not the sum of the thinking of its individual members.

Some common perceived problems with forecasting also receive attention in the book: the wrong-side-of-maybe fallacy, the belief that a forecast was bad because it was greater than 50% but the event didn't occur, which can leave forecasters unwilling to be vulnerable with their forecasts; publishing forecasts for all to see, where research shows that public posting of forecasts, with one's name attached, actually creates more open-mindedness and better performance; and the fallacy that because many factors are unquantifiable due to their real complexity, the use of numbers in forecasting is therefore not useful.

Some concepts I noted for further research were: Bayesian belief updating, which is basically a mathematical way of weighing how strong your prior belief was relative to some specific new information; chaos theory; game theory; Monte Carlo methods; and the systematic intake of news media. These are concepts I was particularly interested in based on my own interests, and I have continued to explore them. This book was very valuable for cohesively bringing together the above concepts in the context of a compelling story, based on the IARPA forecasting tournament that the author's team convincingly won as a product of the research behind this groundbreaking book.
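The Bayesian belief updating flagged above for further study fits in a few lines. The scenario and numbers below are hypothetical illustrations of my own, not examples from the book:

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence.

    Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where P(E) sums over
    both the hypothesis being true and it being false.
    """
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Hypothetical: you give a policy a 40% chance of passing. A supportive
# committee vote is twice as likely if it will pass (80%) as if it
# won't (40%). The vote happens; how much should your forecast move?
posterior = bayes_update(0.40, 0.80, 0.40)
print(round(posterior, 3))  # 0.571
```

The forecast moves from 40% to about 57%, a frequent, modest, evidence-sized increment rather than a jump to certainty, which is exactly the small-update habit the book attributes to superforecasters.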
G**L
Very Good Study
Quality guru W. Edwards Deming is credited with saying, "Without data, you are just another person with an opinion." This book shows that people who do the homework, who gather data, get better forecasting results than opinion alone ever would. No fuzzy thinking in this book. Excellent.
J**.
Not useful for Fantasy League
I’m not being funny or anything, but I bought this book because I thought it would help me sort my fantasy league team out. I’m in second-to-last place, 3 points ahead of Jimmy the Turk, and I’ll level with you, I’m worried. Anyway, Dom Cummings suggested I buy this book, and since he was spot on about the 300m per week from the NHS I thought I would go for it. Alas, in the last two weeks before lockdown I lost both of my fantasy fixtures despite superforecasting hard all night. I even emailed Dom Cummings to ask for my money back, but I superforecast that is unlikely to happen.
O**0
I forecast waffle
This book was recommended to me by a friend because both of us occasionally need to forecast as part of our jobs. In many respects, reading this book was like reading about how granny sucks eggs. A lot of the book covers theory on what makes an accurate forecast. There are some nuggets of insight on good forecasting, nicely summarised in the appendix as the eleven commandments of superforecasting.

The issue with the book is not the material but the padding. There seems to be a lot of it. This is a 300+ page book that could be edited down to half the size without losing information, and many of the same examples of superforecasting are repeated more than once.

It was funny to read that a lot of businesses are not actually that interested in whether a forecast is right or wrong, provided the forecast tells them what they want to hear. Speaking from experience, I know this to be true. In addition, other forecasters are reluctant to revisit old forecasts for fear of exposing their inaccuracies, which, to me, makes zero sense, and I am glad Tetlock agrees with this view.

Overall it is a good read, just nothing special if you do this sort of thing for a living.
A**A
Worth a read
I found this a really interesting book. I was very sceptical at first, and the idea of superforecasters sounded like a case of survivorship bias, but the author addresses this satisfactorily. For the most part the book goes through various types of logical fallacy and how you can avoid them to make more accurate predictions about the future, so if you already know a bit about probability and logical fallacies you won't find much that's new. But the story of the Good Judgment Project is very interesting and certainly worth knowing about.

Certainly in this time of COVID-19, after reading this book you'll start noticing a lot of public figures fall into basic data-interpretation mistakes, make predictions that turn out to be totally wrong, and then carry on as normal anyway!
E**F
Learn the pitfalls and tips for forecasting
This is a great book for understanding forecasting. It explains the methods and personalities of 'superforecasters', people whom Tetlock has studied who consistently outperform experts and non-experts alike in forecasting future events. Not perfect predictions, mind you, but consistently better statistically. What do they do differently from ordinary people to perform so well? Tetlock gives his best explanation in this book.

One criticism I have is that I would've liked it to be slightly less "popular" science: include a bit more hard data, remove a little of the padding. Even with this criticism, though, there was much for me to learn, and the book does include substantial references to evidence.

Prediction is an extremely important component of testing whether your hypotheses are correct, so knowing about prediction is a key issue in science. Anyone who cares a lot about science should read this book or something similar, and to any such person I would gladly recommend it.
I**A
Insightful read, but somewhat too long
The book clearly presents the results of the authors' years of very thorough work on how to improve our forecasting skills. It has given me a different perspective on the validity of many forecasts we see in the media and made me reconsider how I approach forecasting in life and at work. However, after reading about half of it, I felt the concepts became repetitive, as if the author were prolonging the story by dwelling on the same several key ideas. In any case, I would read it again, as it gives a different perspective on an interesting and important topic.