The Economist’s 2022 Senate forecast

How the Senate forecast works

Explore the findings of our model of the race to control Congress

Last updated on November 8th 2022

October 31st 2022: We have updated the methodology behind our congressional forecast.
Read about the changes to our model below.

Changes to our model: Non-technical explanation

On October 16th we paused updates to our forecast of America’s midterm elections. At the time, Democratic candidates’ leads in polls of many Senate races had begun to narrow, but our model’s estimated win probabilities in these contests did not appear to change in response. For example, it still gave John Fetterman, the Democratic candidate in Pennsylvania, a 91% chance of winning when his lead in polls had dwindled to just over five percentage points, an implausibly high degree of confidence. When we investigated what was causing our projections to react so sluggishly, we uncovered several problems that required considerable time to diagnose.
First, our polling averages turned out to be insufficiently prepared for this year’s polling landscape. Although our model can handle the occasional outlier survey, two pollsters in particular (Center Street PAC and Echelon Insights) had released a stream of polls showing remarkably lopsided margins of victory for Democrats in Senate battleground states. Our model does try to correct for pollsters’ tendencies to publish results favouring a particular party. However, because we did not have records of any publicly released polls of congressional races by these firms in prior years, we could not compare historical examples of their surveys with actual election results. In such cases, our model relied on the faulty assumption that new pollsters as a group would not lean significantly towards one side or the other. The day after Echelon released their surveys in September, our Senate forecast lurched towards Democrats by five percentage points.
Another change in polling this year has been a decline in the number of surveys of individual districts in the House of Representatives. In the past, we could improve predictions of the national popular vote calculated using national-level generic-ballot polls by aggregating such surveys, particularly ones conducted by accredited firms. In 2022, however, there have been very few district polls by accredited organisations, whereas many have been conducted by partisan firms whose bias can be hard to measure. As a result, this element introduced both excess volatility and a potential source of bias to the model’s House predictions.
We also found weaknesses in the design of our model. Rather than taking a simple average of recent surveys, our method incorporates all polls of a given race and places extra weight on the most recent ones. Our implementation of this approach turned out to be too conservative, making our polling averages too slow to adjust to abrupt swings in public sentiment—including the souring of the political environment for many Democratic Senate candidates in October. A final problem was that changes in the Federal Election Commission’s bulk-data files on candidates and parties prevented our script from properly processing third-quarter fundraising data released on October 17th.
We have now fixed these issues. Backtesting shows how much they changed our predictions. Between June 1st and October 16th, we overestimated the probability that the Democrats would hold the House of Representatives by an average of 7.4 percentage points, and the probability that they would keep the Senate by 6.1 points. Our errant 91% estimated win probability for Mr Fetterman on October 16th, which prompted our inquiry, should have been 81%. Our model has now updated to include the latest polling from the past two weeks, much of which is bad news for Democrats: Mr Fetterman’s probability as of October 31st, for instance, is 56%.
We apologise for the delay and the need to revise our model. We have corrected these issues and are now releasing updated projections. In the interests of transparency, a more technical accounting, including comparative model outputs and charts, follows below.

Technical explanation

This is a fairly technical explanation of the changes made to The Economist’s midterms forecasting model after we paused updates on October 16th 2022. A non-technical explanation of these changes, with less statistical jargon, is above.
After our model launched in September, we quickly became aware that two pollsters, Center Street PAC and Echelon Insights, had released barrages of surveys in most competitive Senate races that showed Democrats leading by remarkably large margins. Although our model was built to account for and correct pollsters’ biases, it did so only by analysing the performance of each pollster in prior elections. Because of irregularities with Center Street’s surveys, and because we had no records of publicly released polls of congressional races by either firm in prior elections, this system did not work as intended. Our assumption that, on average, new pollsters would not tend to favour one side or the other requires the Democratic- and Republican-leaning firms within this group to have biases that roughly offset each other. In this case, in late September the biggest outliers were unusually extreme, concentrated on the pro-Democratic side and present in nearly every competitive Senate race.
It became clear that we would need a method for fixing this, essentially creating an estimate of pollster-specific “house effects” within a given election cycle. This proved a significant challenge without the ground truth of an election result to anchor a firm’s polls against, and given our desire not to mechanically force the polls to herd towards a consensus estimate. Our method now is to fit a locally estimated regression trendline to raw poll results for each race. We then try to predict the pollsters’ results in a regression that uses the trendline value and a dummy variable for the identity of the pollster. The magnitudes of these pollster-specific coefficients give us an estimate of each firm’s bias within a given election cycle; including these estimates in a predictive model substantially improved the accuracy of our polling averages.
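To illustrate the approach, here is a minimal sketch in Python with hypothetical column names; it shows the two steps (trendline, then a regression with pollster dummies) rather than the model’s actual code:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.nonparametric.smoothers_lowess import lowess

def estimate_house_effects(polls: pd.DataFrame) -> pd.Series:
    """Estimate within-cycle "house effects" for one race.

    Expects columns: 'day' (days since the campaign began), 'dem_margin'
    (the Democratic candidate's lead, in points) and 'pollster'.
    """
    # Step 1: fit a locally estimated (LOESS) trendline to the raw results.
    trend = lowess(polls["dem_margin"], polls["day"],
                   frac=0.5, return_sorted=False)

    # Step 2: regress each poll's result on the trendline value plus a
    # dummy variable for each pollster (no intercept, so every firm gets
    # its own level).
    dummies = pd.get_dummies(polls["pollster"], dtype=float)
    X = pd.concat([pd.Series(trend, index=polls.index, name="trend"),
                   dummies], axis=1)
    fit = sm.OLS(polls["dem_margin"].astype(float), X).fit()

    # Centre the pollster coefficients so each is a lean relative to the
    # average firm: the estimated house effect.
    effects = fit.params[dummies.columns]
    return effects - effects.mean()
```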
In addition, our inquiry into the methodology used by Center Street specifically, which we detailed in an article published in our United States section, led us to conclude that the data-generating process they employ was not statistically sound. We are no longer including their polls in our average as a result. From now on, we will maintain a policy of requesting interviews with any pollster with an unusually large measured bias across a large number of polls before deciding whether to include them in our averages.
Not all the errors could be attributed to inputs, however. The polling average we used was designed to put more weight on recent polls than on older ones. To do this, we implemented an optimised time-decay parameter, which lets the weight assigned to any given poll decline exponentially over time. The parameter was tuned by a sub-model trained to predict the next series of polls. This approach was not particularly well-suited to a race with large swings in polling, some of them the result of outlier pollsters. In such conditions the optimised parameter ends up being large in magnitude—meaning that older polls are given a relatively heavy weight in the mix—which leaves the polling average less likely to respond to sharp swings in public opinion. This probably had the greatest impact in the Pennsylvania Senate race, where the polling average updated too conservatively to capture the significant decline in John Fetterman’s prospects. To deal with the slow updating, we have switched the weighted-averaging method to optimise for predicting actual election results rather than the next poll. We are also now blending that weighted average with a local polynomial regression, which helps the average pick up sharp trends favouring one side or the other more quickly.
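To illustrate the mechanism (not the model’s exact parameterisation), here is a toy exponential-decay average in Python, in which an overly long half-life reproduces the sluggishness described:

```python
import numpy as np

def decayed_polling_average(margins, ages_in_days, half_life):
    """Weighted average in which a poll's weight halves every `half_life`
    days: a long half-life keeps old polls influential, so the average
    reacts slowly to genuine swings in opinion."""
    weights = 0.5 ** (np.asarray(ages_in_days, dtype=float) / half_life)
    return float(np.average(margins, weights=weights))

# Three polls of one race, the newest showing a sharp tightening:
margins, ages = [8.0, 6.0, 2.0], [30, 14, 2]
print(decayed_polling_average(margins, ages, half_life=28))  # sluggish, ~4.7
print(decayed_polling_average(margins, ages, half_life=7))   # responsive, ~3.2
```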
Another problem that emerged was in how we modelled the race for the House of Representatives. Our method aggregated polls conducted at the district level and incorporated them into our forecast for the national popular vote, which is measured more regularly by “generic-ballot” polls. In the past, district-specific surveys, particularly those conducted by accredited firms, have provided valuable information above and beyond what the generic-ballot polling average indicates. However, as analysts at FiveThirtyEight, a data-journalism outlet, have written, the number of district-level polls has declined markedly in this election cycle—and many of them have come from partisan pollsters whose record of accuracy is less reliable. The dearth of available polls meant that even a single new district-level poll, which may have been of dubious quality, had a large impact: in some cases, one district poll could change our national House topline by three percentage points. Given the lack of reliable data this year, we have jettisoned this portion of the model.
The error that initially caused our model to pause was a more mundane one. The third-quarter fundraising data from the Federal Election Commission that was released on October 17th contained candidates with multiple identification numbers and missing party affiliations, which caused our parsing script to stop working. That was more easily fixed than all the rest of the changes detailed above.
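For illustration only, a defensive version of such a parsing step in Python might guard against both failure modes; the column names here are placeholders, not the FEC’s actual field names:

```python
import pandas as pd

def clean_candidates(raw: pd.DataFrame) -> pd.DataFrame:
    """Guard against the two failure modes described above: the same
    candidate filed under several identification numbers, and rows with
    missing party affiliations. Column names are placeholders."""
    # Keep one row per candidate, preferring the filing with the largest
    # reported receipts, rather than assuming identification numbers are
    # unique.
    deduped = (raw.sort_values("receipts", ascending=False)
                  .drop_duplicates(subset=["name", "state", "office"]))
    # Flag missing party affiliations instead of letting them crash the
    # downstream parser.
    return deduped.assign(party=deduped["party"].fillna("UNKNOWN"))
```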
In the interests of transparency, we back-tested the effect that these changes to the model would have on our headline estimates for the probabilities that Democrats keep control of the Senate and the House. The two charts below show the degree of difference that these made. Discarding the national aggregation of district-level polls shifted our House results towards Republicans by around eight percentage points. The impact of the changes on our Senate predictions, particularly during the past two months, has been smaller.
We are also making available the spreadsheets showing the differences in predictions of who will control the House and Senate, as well as some of the most closely watched races during this cycle, such as the Pennsylvania Senate race. We apologise for the errors.

Original methodology

Our forecasting model for America’s Senate elections is trained on every race for a seat in the upper chamber of Congress since 1972, and makes use of data on elections going back to 1942. It calculates its predictions using three basic steps.
The first challenge for the model is to predict an accurate range of outcomes for the national popular vote for the House—the sum of all votes cast for Democratic or Republican House candidates, with an adjustment for seats where one party is running unopposed. (The national popular vote for the Senate is not particularly useful, because one-third of states do not have senators up for election, and states receive equal representation in the Senate regardless of their population.) To calculate this distribution, the model uses data on “generic-ballot” polling—when survey respondents are asked which party’s congressional candidate they plan to vote for—as well as presidential approval ratings; the average results of special elections held to fill vacant legislative seats; and the number of days left until the election. Using a machine-learning technique called “elastic-net regularisation”, the model finds the combination of these variables that would have produced the most accurate predictions of elections it was not allowed to “see” in its training set.
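As a sketch of that final step, using scikit-learn and synthetic stand-in data (the real model’s features, data and tooling differ), one might tune an elastic net so that the penalty is chosen by performance on held-out election cycles:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: one row per historical polling day, with columns
# for the generic-ballot margin, presidential net approval, the average
# special-election swing and days until the election.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 4))
cycle = np.repeat(np.arange(1972, 1992, 2), 6)            # ten toy cycles
y = 50 + 2.0 * X[:, 0] + rng.normal(scale=1.0, size=60)   # national vote

# Hold out whole election cycles during cross-validation, so the penalty
# is chosen by how well the model predicts elections it never "saw".
splits = list(LeaveOneGroupOut().split(X, y, groups=cycle))
model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=splits),
)
model.fit(X, y)
```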
With this distribution of plausible results for the overall national political environment in hand, the model drills down to the state level. Here, its first challenge is to predict each state’s “partisan lean”—the gap between the election result in each state and the overall national average. Some Senate races are never polled, so the model creates a starting benchmark estimate for each state based exclusively on “fundamental” factors like its historical voting record; whether or not an incumbent is running; the candidates’ fundraising, ideological positioning and past experience running for office; and—crucially—the nationwide popular vote for the House. Rather than assuming that a state’s partisan lean (e.g., seven percentage points more Republican than the national average) is fixed, the model estimates how voters might respond differently to their choices under different political contexts. The model also includes an adjustment for partisan polarisation—which we measure as the share of the electorate that voted in consecutive presidential elections for candidates from different parties—which enables it to distinguish how the relative impact of each variable has changed over time.
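To make the arithmetic concrete, here is the simplest, fixed-lean version of that translation in Python (the model itself, as noted above, lets the lean vary with the political context):

```python
def implied_state_share(national_dem_share: float, lean: float) -> float:
    """Translate a national environment into a state-level expectation.

    `lean` is the state's partisan lean: its two-party Democratic vote
    share minus the national average in the same election."""
    return national_dem_share + lean

# A state seven points more Republican than the country, in a year where
# Democrats win 53% of the national two-party vote:
print(implied_state_share(0.53, -0.07))  # ~0.46
```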
After calculating this starting expectation in each state, the model next updates its forecast to incorporate any polls that have been taken of the race. It weights surveys by their methodology—pollsters who call mobile phones using live interviewers and belong to organisations committed to transparency in research get more weight—and by how long before the election they were taken, and adjusts the results of polls sponsored by candidates or political parties to counteract their expected bias. In heavily polled races, the fundamentals-based forecast constitutes just a small share of the final blended average, but the model will continue to rely on it heavily in contests with sparse or unreliable polling. The more polling a race has, the more confident the model can be of its forecast—and thus, the narrower the distribution of potential outcomes around its central estimate will be.
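The blending behaviour described above, in which more polling both pulls the forecast towards the polls and narrows the distribution, can be illustrated with a precision-weighted average; this is a standard technique sketched with made-up numbers, not the model’s actual machinery:

```python
def blend_forecast(prior_mean, prior_var, poll_mean, poll_var):
    """Precision-weighted blend of a fundamentals prior with a polling
    average. More (and better) polls shrink `poll_var`, shifting weight
    towards the polls and narrowing the blended distribution."""
    prior_precision, poll_precision = 1 / prior_var, 1 / poll_var
    weight_on_polls = poll_precision / (prior_precision + poll_precision)
    mean = weight_on_polls * poll_mean + (1 - weight_on_polls) * prior_mean
    var = 1 / (prior_precision + poll_precision)
    return mean, var

# A heavily polled race: the polls dominate the fundamentals prior.
print(blend_forecast(0.52, 0.03**2, 0.49, 0.01**2))  # ~(0.493, 0.00009)
```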
The final step is to combine these three elements into a forecast, by randomly simulating a result in each race 10,000 times. To start the simulation, the model picks 10,000 values at random from its distribution of outcomes for the national popular vote. Most will cluster around the average—so, in a year where Democrats are most likely to get 53% of the vote, the bulk of simulated values will fall between 52% and 54%—but some simulations will produce highly improbable outliers. Each of these results represents one hypothetical national political context in which each Senate race will occur. In presidential-election years, the model also simulates which party will control the vice-presidency, which casts the tiebreaking vote in case of a 50-50 split, based on the simulated national popular vote for the House.
For each of the 10,000 simulations, the model then feeds the simulated national result down to the state level, and calculates a distribution of potential vote shares for each race in each simulation. In scenarios that are unusually good for the Republicans, all of these distributions will shift in the GOP’s direction, but not necessarily by the same amount. The model then proceeds to draw one value at random from each of the 350,000 resulting distributions. Even in simulations where the Democrats romp to victory nationally, Republicans will still pull off some surprising upsets in a few unexpected races.
From there, the model simply counts up the number of seats won by each party in each simulation. Those in which the Democrats win at least 51 seats, or 50 plus the vice-presidency, are recorded as a Democratic victory; the rest go to the Republicans. The probability of victory published on our site is simply the percentage of our 10,000 simulations won by each party.
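Putting the three steps together, a toy version of the whole simulation might look like the following Python sketch, with three made-up races standing in for all 35 on the ballot and the 2022 fact that a Democratic vice-president breaks a 50-50 tie:

```python
import numpy as np

rng = np.random.default_rng(2022)
N_SIMS = 10_000
DEM_SEATS_NOT_UP = 48  # toy value, so that control hinges on these races

# Made-up race parameters: expected Democratic two-party vote share,
# sensitivity to the national environment, and race-specific noise.
races = {
    "Pennsylvania": (0.510, 1.0, 0.03),
    "Georgia":      (0.500, 0.9, 0.03),
    "Nevada":       (0.495, 1.1, 0.03),
}

# Step 1: draw a national political environment for each simulation,
# expressed as a vote-share swing relative to the central expectation.
national_swing = rng.normal(loc=0.0, scale=0.02, size=N_SIMS)

# Step 2: shift every race with the national draw, then add independent
# race-level noise, so that even a simulated Democratic wave leaves room
# for scattered Republican upsets.
dem_seats = np.full(N_SIMS, DEM_SEATS_NOT_UP)
for mean, sensitivity, noise_sd in races.values():
    share = (mean + sensitivity * national_swing
             + rng.normal(scale=noise_sd, size=N_SIMS))
    dem_seats += (share > 0.5).astype(int)

# Step 3: count up the winners. In 2022 the sitting vice-president is a
# Democrat, so 50 seats suffice for Democratic control.
dem_wins = dem_seats >= 50
print(f"Pr(Democrats control the Senate) = {dem_wins.mean():.1%}")
```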
