Cloudy with a chance of errors: Using Machine Learning to predict rain/no-rain patterns in Singapore

Chua Chin Hon
10 min read · Mar 17, 2019


Can machine learning tools accurately predict weather patterns? It’s tough, even in Singapore where the weather doesn’t experience seasonal extremes.

Is it going to rain or not? It’s not always this obvious.

Singapore’s weather can be unpredictable, even if it doesn’t experience seasonal extremes. As part of a self-assigned project on machine learning fundamentals, I sought to answer a simple question: Can you predict whether and when it will rain over the next three months, given a big enough set of historical data?

The Meteorological Service Singapore’s website has daily weather data going back to 1980. For this project, I put together a weather dataset for the 10 years between January 01 2009 and December 31 2018.

There are tonnes of machine learning (ML) models out there. I chose two for this project — a Random Forest Classifier and Facebook’s Prophet forecasting tool. The objective is to try to make accurate predictions about the rain/no-rain pattern for the first three months outside the dataset, that is, January, February and March 2019.

My methodology, broadly (a minimal sketch in code follows the list):

  • Use the 10-year dataset to “train” the Random Forest model to classify rain/no-rain days, using daily mean temperature and mean wind speed as features.
  • Use Prophet to predict the daily mean temperature and wind speed for Jan-Mar 2019.
  • Feed Prophet’s predicted data into the Random Forest Classifier to see whether the classifier can accurately predict the upcoming rain/no-rain days, and compare the results against those obtained when actual weather data is fed into the model.
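
To make this concrete, here is a minimal sketch of the training step in Python. The file and column names (weather_2009_2018.csv, mean_temp, mean_wind_speed, rain) are placeholders of my own, not necessarily those in the actual notebooks:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Load the cleaned 10-year weather dataset (placeholder file/column names)
df = pd.read_csv("weather_2009_2018.csv", parse_dates=["date"])

# Features: daily mean temperature and mean wind speed
# Target: 1 for a rain day, 0 for a no-rain day
X_train = df[["mean_temp", "mean_wind_speed"]]
y_train = df["rain"]

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

# Later steps feed Prophet's predicted temperature/wind speed values
# for Jan-Mar 2019 into clf.predict() to get the rain/no-rain forecast
```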

This is a work in progress. The initial results seem to swing wildly from month to month. I’ll be curious to see how this works out in a full year.

INITIAL RESULTS

It seems unfair to make a casual reader wade through a lengthy technical explanation, so let me share the headline results first before explaining the methodology and data further.

Bottom line: Even with 10 years of data, my model struggled to accurately predict the monthly rain/no-rain patterns between Jan 01 and Mar 31 2019, the first 90 days outside the dataset.

In January 2019, Singapore experienced 9 days of rain while the remaining 22 days saw no rain. My model could only accurately predict January’s rain/no-rain pattern for 10 out of 31 days.

For February, it rained on 5 days, which the model completely failed to predict (even if it accurately predicted that it would not rain on the other 23 days). Here are the results visualised for January and February 2019 (full weather data for March would only be available on April 10):

The model was wildly off for Jan 2019, predicting a pattern no one in Singapore would recognise.
The model improved somewhat for Feb 2019, but could not pick out any of the 5 rainy days, after predicting a whopping 16 in Jan 2019.

Here are the results visualised another way, via a pair of confusion matrices:

For January: The model only managed to accurately predict 2 days of rain (Jan 08 and Jan 11) and 8 days of no-rain. It wrongly predicted on 14 occasions that it would rain when it actually didn’t, and likewise wrongly forecast no rain on 7 days when it in fact rained. Its accuracy for the month, calculated as (true positives + true negatives) / (positives + negatives), was a dismal 10/31 ≈ 0.32.

For February, the driest month of the year in Singapore, the model predicted zero days of rain. It in fact rained on 5 days: Feb 02, 17, 18, 19 and 27. The model’s accuracy improved to 0.82 (23/28) for the month, but it still failed to pick out a single rainy day.

For March, the model predicted 8 days of rain. I’ll update this section when full data for March are in.

TROUBLESHOOTING

As part of a limited troubleshooting exercise, I ditched Prophet’s predicted values (more on this later) and used the actual recorded weather data for January and February 2019 to see if the results from the Random Forest Classifier would be any better. There was indeed a marked improvement:

The model was able to make better rain/no-rain day predictions with actual weather data than predicted values.
With actual weather data, the model was able to predict 2 out of the 5 rainy days in Feb 2019.

This time round, with actual weather data, the model was able to accurately classify 21 out of the 31 rain/no-rain days in January 2019 (compared to just 10/31 previously). For February, it could accurately classify 25 of the 28 days (compared to just getting 23 no-rain days right previously).
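
Mechanically, the swap is trivial: the same trained classifier simply receives a different feature matrix. A sketch, with placeholder variable names:

```python
# Both inputs are DataFrames with the same two feature columns
# used in training (placeholder names)
pred_from_prophet = clf.predict(prophet_features)  # Prophet's forecast values
pred_from_actual = clf.predict(actual_features)    # recorded Jan-Feb 2019 data
```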

Visualising the revised predictions via confusion matrices again:

I guess it’s no surprise that the model works better with actual data (duh). But this is a learning exercise, and there are useful lessons here for me on the iterative process behind data science, which requires multiple rounds of changes and re-thinking while trying to avoid the rabbit holes.

Here’s the link to my Github repo for this project: https://github.com/chuachinhon/new_weatherML

I’ll dive a little more into the data and model in the subsequent sections.

1. DATA PREPARATION

The Met Service’s website has daily records of the weather in Singapore going back to January 01 1980. At first glance, there seems to be a wealth of data points on offer.

But the reality is that several categories are directly related to one another (such as mean, maximum and minimum temperature), and hence add little independent information for machine learning models. The project would have benefitted from more truly independent variables, such as measures of humidity and barometric pressure:

Snippet of the cleaned-up weather set used in this project.

Fun historical nugget: In the 10 years between January 01 2009 and December 31 2018, Singapore had an almost equal number of rain and no-rain days — 1,801 rainy days and 1,851 no-rain days (3,652 days in total), to be precise. In other words, there is no major class imbalance in the data, and the model is not trying to detect a statistically rare event, the way, say, a cancer-screening model would be.
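
For reference, this is roughly how the binary rain/no-rain label and the class-balance check can be derived with pandas; daily_rainfall_mm is a placeholder column name:

```python
# Label a day as rainy (1) if any rainfall was recorded, else 0
df["rain"] = (df["daily_rainfall_mm"] > 0).astype(int)

# Check the class balance across the 10 years
print(df["rain"].value_counts())
# Roughly 1,851 no-rain days (0) vs 1,801 rain days (1)
```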

2. PROPHET’S PREDICTIONS

The 10-year dataset was used to train the Random Forest Classifier model to forecast rain/no-rain days. However, in order to predict the rain/no-rain pattern for the first 90 days outside the dataset, the model would have to be given predicted temperature and windspeed values via another forecasting model.

For this I turned to Facebook’s Prophet forecasting tool. FB uses this tool for its own internal forecasts and claims that it “works best with time series that have strong seasonal effects and several seasons of historical data”. A detailed explanation is available via this blog post and academic paper.
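
Prophet’s API is refreshingly simple. A minimal sketch for the temperature forecast, reusing the placeholder dataframe from the earlier sketch (Prophet expects two columns named ds and y):

```python
from fbprophet import Prophet

# Prophet expects a dataframe with a "ds" (date) and "y" (value) column
temp_df = df[["date", "mean_temp"]].rename(columns={"date": "ds", "mean_temp": "y"})

m = Prophet()
m.fit(temp_df)

# Forecast 90 days beyond the dataset, i.e. Jan-Mar 2019
future = m.make_future_dataframe(periods=90)
forecast = m.predict(future)

m.plot(forecast)             # actual data (black dots) vs predictions (blue line)
m.plot_components(forecast)  # trend plus yearly and weekly seasonality
```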

The tool’s ease of use and the built-in visualisation functions are also a boon for newcomers like me. Examples:

The black dots represent actual data, the bright blue line represents the predictions by Prophet.
Left: Breakdown of Prophet’s forecast components for mean daily temperature. Right: Similar breakdown for daily rainfall. The charts confirm some well-known trends about the weather in Singapore, such as daily temperature rising to a peak in June, and rainfall rising noticeably in November and December.

I merged the Prophet predictions into a separate CSV file for the next step. The big question, of course, is whether they are accurate. As it turns out, Prophet’s values are on the low side.

On average, Prophet’s predicted mean temperatures for January 2019 were 1.1°C lower than actual values. Its predicted daily mean wind speeds for the same month were, on average, 2 km/h slower than actual measured values. For February, daily mean temperatures predicted by Prophet were 0.6°C lower than actual measured values on average, while daily mean wind speeds were 0.81 km/h slower.

I tried to change some of the parameters in Prophet to get more accurate predictions but was unsuccessful.

Prophet’s predictions for daily mean temperature had a mean absolute error of 0.8°C on average (see notebook 3.1), while the predictions for wind speed had a mean absolute error of 1.9 km/h on average (see notebook 5.1). Unfortunately, I lack the domain knowledge about weather forecasting to say if these error margins ought to ring alarm bells before proceeding further.
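
These error figures come from comparing Prophet’s yhat column against the recorded values. A sketch of the calculation, assuming predictions and actuals have already been merged on date (column names are placeholders):

```python
from sklearn.metrics import mean_absolute_error

mae_temp = mean_absolute_error(merged["actual_mean_temp"], merged["yhat_temp"])
mae_wind = mean_absolute_error(merged["actual_wind_speed"], merged["yhat_wind"])
print(f"Temperature MAE: {mae_temp:.1f}°C, wind speed MAE: {mae_wind:.1f} km/h")
```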

3. RAIN OR NOT?

I benchmarked the Random Forest Classifier against a dummy classifier, ran it with an initial set of parameters, and then fine-tuned it with a grid search. Details are in notebook 6.1.
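
For readers unfamiliar with grid search, it is a brute-force sweep over candidate hyperparameters, scored with cross-validation. The grid below is illustrative, not the exact one used in notebook 6.1:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [100, 300, 500],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5, 10],
}

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=5,                # 5-fold cross-validation
    scoring="accuracy",
)
grid.fit(X_train, y_train)  # X_train/y_train from the earlier sketch
print(grid.best_params_, grid.best_score_)
```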

The model’s metrics are not fantastic. But they are not disastrous either. Here’s a look at the details:

  • Precision: This can be seen as a measure of exactness, ie, of the days that the model classified as rain (or no-rain), what percentage was correct?
  • Recall: This can be seen as a measure of completeness, ie, of all the days that were actually rainy (or dry), what percentage did the model classify correctly? As we have seen earlier, it was clearly better at classifying no-rain days.

Here’s a more detailed explainer of precision and recall: “In simple terms, high precision means that an algorithm returned substantially more relevant results than irrelevant ones, while high recall means that an algorithm returned most of the relevant results.”

The model’s low recall score for rainy days points to the problem we’ve seen in its ability to accurately pick out rainy days.

The model’s performance can also be visualised via a confusion matrix:

The model correctly predicted rain on 319 days (true positives). On the flip side, it classified 239 days as “no-rain” days and was correct (true negatives).

However, it wrongly predicted that it would rain on 214 days when it in fact did not (false positives), and wrongly predicted no rain on 141 days when it in fact rained (false negatives).

The model has an accuracy of 0.61, as calculated by (true positives + true negatives) / (positives + negatives): (319 + 239) / (319 + 239 + 214 + 141) = 558/913 ≈ 0.61. Here’s a better explanation of the evaluation measures from the confusion matrix.
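
scikit-learn computes all of these measures directly; a sketch, assuming a held-out test set X_test/y_test:

```python
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

y_pred = clf.predict(X_test)

print(confusion_matrix(y_test, y_pred))       # [[TN, FP], [FN, TP]] for 0/1 labels
print(classification_report(y_test, y_pred))  # per-class precision, recall, F1
print(accuracy_score(y_test, y_pred))         # (TP + TN) / total
```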

As noted earlier, the model’s performance is not great but not a disaster either. What then accounts for the terrible predictions for Jan 2019? I tried to do some simple troubleshooting.

4. TROUBLESHOOTING

Prophet’s predictions are the most likely “culprits” in this case. Here are the predicted values (left) lined up against the actual recorded weather data (right) for January 2019:

Pardon the slight misalignment.

Prophet’s predictions for mean daily temperatures are consistently lower than actual data except for one day — Jan 18 2019 — when it rained heavily. Prophet’s parameters can be tweaked but I could not produce predictions for January that consistently rose above 27°C.
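
For anyone who wants to pick up where I left off, Prophet exposes a handful of tuning knobs. The values below are illustrative starting points, not settings I have verified to work:

```python
m = Prophet(
    changepoint_prior_scale=0.5,        # default 0.05; higher = more flexible trend
    seasonality_prior_scale=15.0,       # default 10.0; higher = stronger seasonality
    seasonality_mode="multiplicative",  # default is "additive"
)
m.fit(temp_df)  # temp_df from the earlier Prophet sketch
```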

The model’s performance improved when actual data was used, as we have seen in the earlier sections. So getting more accurate predictions out of Prophet would be key to improving the Classifier’s performance if we want to predict the rain/no-rain pattern for the next 90 or 180 days in 2019.

5. CONCLUSION + THOUGHTS

Without better predictions from Prophet or some other time series model, attempts to make long-range predictions via this method would be error-prone.

In other words, I might be able to get a decent prediction if I used today’s recorded weather data to forecast whether it will rain tomorrow. But predicting the rain/no-rain pattern several months into the future would be problematic, as it would require predicted weather data, which would itself contain errors.

This project can be further overhauled in several ways:

  • Tweak Prophet to try to make the predicted values for temperature and wind speed more accurate.
  • Use a different model, instead of Prophet, to predict the Jan-Mar weather values.
  • Obtain more weather data, such as humidity or barometric pressure. Unfortunately, these aren’t consolidated in the Met’s historic daily records. Ultimately, it is hard to accurately predict rain with data for just mean wind speed and temperature.
  • Introduce new columns for moving averages/sums via feature engineering (a quick sketch follows this list).
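
The last idea is cheap to prototype in pandas. A sketch, again with placeholder column names and an arbitrary 7-day window:

```python
# Rolling 7-day mean temperature and 7-day rainfall sum as extra features
df["temp_7d_mean"] = df["mean_temp"].rolling(window=7).mean()
df["rain_7d_sum"] = df["daily_rainfall_mm"].rolling(window=7).sum()

# The first 6 rows lack a full window and come back as NaN
df = df.dropna()
```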

These steps would require more time and add to the bloat of this project. As a starter project to help me understand the basics of machine learning and think through some real-world challenges/limitations, I think it has served its purpose.

I intend to apply the lessons to future projects, but look forward to feedback and suggestions from others who might take on the unresolved questions in this project/dataset. All errors are mine, naturally.

Even the pros have admitted that there are limits to what weather forecasting can achieve at this point with established methods. So this is definitely an area worth revisiting as machine learning methods become more advanced.

Resources for this project at a glance:

My Github repo: https://github.com/chuachinhon/new_weatherML

Historical weather data on the Met’s website: http://www.weather.gov.sg/climate-historical-daily/

Special thanks to Benjamin Singleton for his help in this project and answering a bunch of my questions along the way.


Written by Chua Chin Hon

Building and reviewing AI products for newsrooms
