Build a Species Distribution Model: A Classroom Workshop Using the Butternut Restoration Dataset

Dr. Eleanor Whitcombe
2026-05-14
25 min read

A teacher-friendly SDM workshop using butternut data, open climate and soil layers, reproducible R code, and assessment rubrics.

Species distribution models are among the most useful tools students can learn when they want to connect ecology, climate science, and conservation decision-making. In this workshop, learners build a practical model for butternut restoration using open climate and soil layers, then interpret the result as if they were advising a forest manager. The exercise is grounded in a real restoration challenge: butternut, a native North American tree, has been devastated by butternut canker, yet recent research shows that climate and soil conditions can help identify where resistant trees and hybrids are most likely to succeed. That makes the topic ideal for a classroom workshop because it combines real data, spatial thinking, and ethical restoration planning.

If you want a broader teaching context before starting, this workshop sits neatly alongside our guide to designing sensor-based experiments for statistics and modelling, our lesson on teaching feedback loops, and a practical approach to decision-making with data systems. The core idea is simple: students do not just learn what a species distribution model is, they build one, critique it, and use it to make a restoration recommendation.

1) Why Butternut Is a Powerful SDM Classroom Case Study

A conservation story with real stakes

Butternut is a close relative of black walnut, valued for its timber, nuts, and role as a mast tree that supports wildlife. The tree has nearly vanished from many eastern forests because of an invasive fungal disease, but the species has not disappeared completely. Researchers working with climate, soil, and genetic information have identified where resistant butternut trees and resistant hybrids are most likely to survive, including parts of southern Indiana, western Kentucky, western Michigan, and New England. That is exactly the sort of evidence-based conservation story that gives a species distribution model meaning beyond the screen.

The real educational power comes from the fact that the model is not abstract. Students can see how a tree species’ future depends on environmental suitability, not just good intentions. They can also discuss uncertainty, because no model is perfect, and the same map that helps restoration can also reveal where planting is likely to fail. That tension is what makes the lesson authentic, and it is why the butternut example is stronger than a toy dataset built for demonstration only.

What students learn technically

This workshop teaches the full modelling workflow: defining the ecological question, assembling presence/absence or hybrid-tolerance data, preparing climate and soil layers, sampling predictor values, fitting a model, evaluating accuracy, and turning predictions into a restoration map. Depending on your class level, you can use either ArcGIS Pro or R. If you want a planning mindset for what comes next, this is similar to satellite intelligence for risk management or to worked examples that turn raw inputs into decision-ready estimates.

Students also practise scientific reasoning: they must justify predictor choice, explain why climate and soil matter, and evaluate whether the model is useful for restoration planning. Those skills transfer well to geography, environmental science, biology, and data science courses. For classrooms that want to connect modelling to communication, the lesson can be paired with event-led content structures so students present their findings as a briefing, poster, or policy memo.

Source-grounded scientific context

The study behind this lesson found that temperature, precipitation, and soil carbon helped distinguish sites suitable for resistant butternut trees and hybrids. That matters because trees do not respond to climate alone. Soil moisture, nutrient status, organic carbon, and local edaphic conditions can all support or constrain regeneration. When students include both climate and soil layers, they are approximating how real restoration scientists think: species ranges are shaped by multiple interacting environmental filters, not a single map layer.

2) Workshop Overview: What You Will Build

Two modelling pathways

There are two main options for the classroom. The first is a classic presence/absence species distribution model: students use occurrences of butternut or resistant trees, plus background or absence points, to estimate the probability of suitable habitat. The second is a resistance-tolerant hybrid mapping workflow: students compare sites where resistant trees or hybrids are known to occur against less suitable sites, then build a map of restoration potential. Both methods are valid, and both can be taught in a way that highlights data literacy and ecological interpretation.

In lower-secondary or introductory classes, the presence/absence route is easier to explain because it uses familiar binary outcomes. In more advanced classes, a hybrid tolerance map can introduce ideas like classification, weighting, and thresholding. You can also use both and ask students which is more defensible for a restoration decision. That discussion is especially valuable because model choice is never neutral; it shapes which landscapes are prioritised and which are ignored.

Suggested learning objectives

By the end of the workshop, students should be able to describe what a species distribution model does, prepare predictor layers, run a basic model in ArcGIS Pro or R, interpret output maps, and justify a restoration recommendation using evidence. They should also be able to explain at least two limitations of the model, such as sampling bias or coarse environmental resolution. If your class is exploring reproducible workflows, you may also want to compare this lesson with reproducible benchmarking approaches and with data-access auditing practices, because both reinforce transparency and responsible analysis.

Workshop deliverables

The final student products can be a suitability map, a short written interpretation, and a one-slide management recommendation. For assessment, you can add a methods worksheet and a reflection on model uncertainty. For enrichment, ask students to propose a follow-up field survey. That move brings the lesson into the same practical mindset as systems that affect the physical world: if a map changes planting decisions, it needs robust evidence and careful interpretation.

3) Data Sources: Climate Layers, Soil Data, and Butternut Records

Where to get predictor layers

The simplest classroom SDM uses openly available climate layers such as long-term average temperature and precipitation, plus soil variables such as pH, drainage, carbon, and texture. In the US context, students can use WorldClim or CHELSA climate rasters and SoilGrids or SSURGO-derived soil information, depending on access and resolution needs. The key teaching point is not the brand name of the dataset but the logic: the model should include variables that plausibly limit butternut establishment and persistence.
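
If you want students to fetch the climate layers themselves, the sketch below shows one possible route via the geodata package. This is an assumption about your setup (package availability, internet access), and the resolution and layer names are illustrative; WorldClim layer names can differ by version, so pre-check them before class.

library(geodata)

# Download WorldClim bioclim layers at 10 arc-minute resolution (illustrative)
bio <- worldclim_global(var = "bio", res = 10, path = "data/")

# Keep the two climate predictors used later in this workshop;
# layer names may differ slightly by WorldClim version
bio1  <- bio[["wc2.1_10m_bio_1"]]   # mean annual temperature
bio12 <- bio[["wc2.1_10m_bio_12"]]  # annual precipitation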

For a UK-based classroom, it is worth briefly noting that the same workflow would work with UKCEH or Environment Agency layers if you wanted to model a local species. That comparison helps students understand the portability of spatial modelling. It also links nicely to our explainer on trend tracking, because a sound data workflow depends on choosing the right signals, not on amassing data for its own sake.

Presence, absence, and hybrid tolerance data

The response variable is the outcome students are trying to predict. In a classic SDM, this is usually presence versus absence, though true absence data can be hard to obtain for rare species. In this butternut workshop, you can use verified occurrence points for resistant trees or occurrences of butternut within restoration sites, and then generate pseudo-absence or background points where appropriate. If you are teaching hybrid tolerance mapping, you can encode classes such as resistant butternut, likely hybrid, and unsuitable site.
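
If you take the pseudo-absence route in R, the sketch below shows one way to generate background points with terra::spatSample. It is a minimal sketch, not the only valid design: the object names (preds, pts_v) match the script in Section 6, and the number of background points is an illustrative choice your class should discuss.

library(terra)

set.seed(42)
bg <- spatSample(preds, size = 500, method = "random",
                 na.rm = TRUE, as.points = TRUE)

# Combine labelled presences (1) with background points (0)
presence_df   <- data.frame(presence = 1, crds(pts_v))
background_df <- data.frame(presence = 0, crds(bg))
response <- rbind(presence_df, background_df)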

This is a useful moment to discuss why ecologists often work with imperfect data. Rare species are under-sampled, and absence can mean “not surveyed” rather than “not present.” That distinction is a natural way to introduce scientific uncertainty. It also gives students a chance to practise careful inference, much like readers must do when evaluating complex claims in public interest campaigns that may have hidden incentives.

Preparing the classroom dataset

Before the workshop, the teacher should prepare a tidy table of presence points with latitude, longitude, and a class label, plus the raster layers for climate and soil. Make sure the coordinate reference system matches across layers and that all rasters are clipped to the same study area. For teaching, keep the study area manageable: the eastern United States is large, so a regional subset focused on the Midwest and Northeast can make processing faster and the story more concrete.
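
As a concrete illustration, here is a minimal alignment sketch in terra; the file names and study-area extent are placeholders for whatever your class actually uses.

library(terra)

bio1  <- rast("bio1.tif")
soilC <- rast("soil_carbon.tif")

# Reproject the soil layer onto the climate grid so cells line up exactly
soilC_aligned <- project(soilC, bio1, method = "bilinear")

# Clip everything to one study area: ext(xmin, xmax, ymin, ymax)
study_area <- ext(-95, -70, 35, 48)  # illustrative Midwest-Northeast window
bio1 <- crop(bio1, study_area)
soilC_aligned <- crop(soilC_aligned, study_area)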

If you want to make the lesson feel even more like professional science, add a data dictionary and an attribution note for every layer. This is a good habit for all science classes, and it mirrors the documentation culture behind trust signals and responsible disclosures. Students learn that good modelling is not only about algorithms; it is also about provenance and reproducibility.

4) Choosing a Workflow: ArcGIS Pro or R

ArcGIS Pro pathway

ArcGIS Pro is a strong choice if your school already uses Esri tools or if you want a highly visual workflow. Students can load occurrence points, stack raster layers, run raster sampling, and create a suitability map using built-in geoprocessing tools. For many teachers, the visual interface reduces barriers because learners can see each step on the map rather than only in code. That makes it an excellent option for mixed-ability groups.

The main strengths of ArcGIS Pro are speed, map-based interpretation, and fewer coding prerequisites. The trade-off is that students may not see the full model logic as clearly as they would in code, so it is worth annotating every step and exporting the settings used. If you are teaching future-facing geospatial workflows, this links well with how computing hardware shapes analytical capability, because geospatial modelling often depends on both the software and the processing environment.

R pathway

R is the best choice if you want reproducibility and transparent code. A scripted workflow also makes it easier for students to rerun the model, test different predictors, and understand each transformation. Packages such as terra, sf, pROC, and either randomForest or maxnet are enough for a practical classroom model. This path is ideal for advanced secondary students, undergraduates, and teacher-training groups.

R also makes it straightforward to extend the lesson into assessment. Students can submit scripts, knitted reports, or annotated notebooks. That structured workflow resembles the discipline used in memory-efficient application design: if the inputs and processes are explicit, the outputs become easier to audit and improve.

Which option should you choose?

If your priority is accessibility, choose ArcGIS Pro. If your priority is transparency and repeatability, choose R. If you have time, use both: let one group build the model in ArcGIS Pro and another reproduce it in R, then compare outputs. That comparison is a powerful lesson in method equivalence and implementation differences. It also prepares students for real-world science, where the same ecological question may be tackled with multiple tools.

| Workflow | Best For | Strengths | Limitations | Classroom Use |
| --- | --- | --- | --- | --- |
| ArcGIS Pro SDM | Visual learners, GIS beginners | Point-and-click, strong map output | Less transparent than code | Intro or mixed-ability classes |
| R logistic model | Students learning reproducibility | Fully scripted, easy to rerun | Requires coding support | Advanced secondary or university |
| Presence/absence model | Core ecology teaching | Simple outcome, easy to explain | Absence data can be noisy | Most classes |
| Hybrid tolerance mapping | Extension or enrichment | Captures restoration nuance | More complex to justify | Higher-level classes |
| Thresholded suitability map | Decision-making tasks | Clear restoration zones | Threshold choice affects results | Planning and assessment |

5) Step-by-Step Classroom Workshop Procedure

Stage 1: Introduce the ecological problem

Start with the story of butternut decline and ask students why some trees survive while others do not. Then show the class a simple map of the study region and ask them where they think restoration should happen. This opening question helps them see that models are tools for answering practical conservation questions. It also mirrors the decision process behind community risk management using satellite intelligence: where should limited effort go first?

Next, define the response variable. For a presence/absence model, explain that each point is labelled 1 if resistant butternut or a target occurrence is observed, and 0 if the site is unsuitable or absent. For a hybrid map, explain the categories and why restoration managers may accept some hybridisation if it improves persistence. This is a chance to discuss conservation trade-offs honestly rather than pretending all restoration is simple.

Stage 2: Prepare predictor layers

Have students inspect the climate and soil layers and predict which variables might matter most. Temperature influences growth and frost risk, precipitation affects water availability, and soil carbon often acts as a proxy for soil fertility and biological activity. Ask them to think about interactions: a site may be climatically suitable but still fail if the soil is too shallow or poorly structured. That is the ecological equivalent of a system working in theory but failing in practice because one hidden constraint is ignored.

Before modelling, students should standardise extent and resolution. This is not glamorous, but it is essential, because rasters with mismatched cell size or alignment can produce misleading results. For a good teaching analogy, compare this to auditing cloud access: if you do not know what is connected to what, your outputs are less trustworthy.
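
A quick way to make that check concrete in R is terra::compareGeom(); the sketch below assumes the predictor rasters from the Section 6 script and uses bio1 as the reference grid.

library(terra)

# Returns FALSE (rather than erroring) if extent, resolution, or CRS differ
if (!compareGeom(bio1, soilC, stopOnError = FALSE)) {
  # Resample the mismatched layer onto the reference grid
  soilC <- resample(soilC, bio1, method = "bilinear")
}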

Stage 3: Sample predictor values and fit the model

In ArcGIS Pro, sample raster values at the occurrence points, split the data into training and test subsets, and fit a classification or suitability model. In R, students can extract raster values at points and fit a logistic regression or random forest. Logistic regression is a good first model because the coefficients are interpretable. Random forest is a useful extension because it can capture nonlinear relationships, though students should be warned that predictive power does not automatically equal ecological insight.

Encourage students to write down a modelling question before they click anything: “Which climate and soil conditions best predict resistant butternut occurrences?” That sentence keeps the analysis focused. If you want to sharpen the comparison between approaches, point them to benchmarking practices, because model performance should be judged with clear metrics, not impressions.

Stage 4: Evaluate accuracy

Students should calculate at least one accuracy metric, such as AUC, confusion matrix accuracy, or sensitivity and specificity. A model that predicts every site as suitable may look optimistic on a map but will perform poorly in practice. The assessment discussion should emphasise that managers need both presence prediction and false-alarm control, because planting in the wrong place wastes resources. This is where science becomes policy-relevant.

Ask students to compare their training and test scores. If training performance is much better than test performance, the model may be overfit. That is a useful statistic to include in reports because it teaches learners that a beautiful map can still be a weak model. For an analogy outside ecology, think of cross-checking market data: a single number is not enough if the underlying process is unstable.
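
To make Stage 4 concrete, the sketch below extends the R script in Section 6; the 0.5 cutoff is an illustrative assumption, not a recommendation, and the train/test AUC comparison is the overfitting check described above.

# Confusion matrix at an illustrative 0.5 cutoff
pred_class <- ifelse(prob > 0.5, 1, 0)
cm <- table(observed = test$presence, predicted = pred_class)

sens <- cm["1", "1"] / sum(cm["1", ])  # proportion of presences found
spec <- cm["0", "0"] / sum(cm["0", ])  # proportion of absences rejected

# Compare train and test AUC; a large gap suggests overfitting
prob_train <- predict(m_rf, newdata = train, type = "prob")[, 2]
auc(roc(train$presence, prob_train))
auc(roc(test$presence, prob))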

Stage 5: Convert output to a restoration map

Finally, students turn model probabilities into a classified suitability map. They can use terciles, a chosen threshold, or a management-driven cutoff such as the top 20 percent of cells. Ask them to justify the threshold in writing. This step is where the lesson becomes restoration planning rather than only modelling.
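
For classes using R, one way to produce tercile zones is terra::classify(); this is a minimal sketch that assumes a probability raster named suitability, such as the one built with terra::predict() in Section 6.

# Whole-raster tercile breakpoints
q <- global(suitability, fun = quantile, probs = c(1/3, 2/3), na.rm = TRUE)

# Reclassify: 1 = low, 2 = medium, 3 = high suitability
rcl <- matrix(c(-Inf,   q[[1]], 1,
                q[[1]], q[[2]], 2,
                q[[2]], Inf,    3),
              ncol = 3, byrow = TRUE)
zones <- classify(suitability, rcl)
plot(zones, main = "Butternut restoration suitability zones")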

Now students can answer the manager’s question: Where should resistant butternut be planted, where should monitoring happen, and where is restoration unlikely to succeed without further intervention? If you want a strong interdisciplinary extension, link the exercise to scenario-based forecasting and compare ecological suitability with other kinds of spatial decision-support systems. Students will see that the logic of evidence-based planning crosses subjects.

6) Reproducible R Tutorial

Core code template

Below is a classroom-ready starter script. It assumes you have a point file with a binary response column named presence and raster layers for climate and soil. The exact data sources can vary, but the workflow remains the same. Teachers should pre-test the script and simplify the file paths for their lab environment.

library(terra)
library(sf)
library(dplyr)
library(pROC)
library(randomForest)

# Load occurrence data
pts <- st_read("butternut_points.gpkg")
pts_v <- vect(pts)

# Load predictors
bio1 <- rast("bio1.tif")   # mean annual temperature
bio12 <- rast("bio12.tif") # annual precipitation
soilC <- rast("soil_carbon.tif")
pH <- rast("soil_pH.tif")
preds <- c(bio1, bio12, soilC, pH)

# Extract raster values to points
vals <- extract(preds, pts_v)
dat <- bind_cols(st_drop_geometry(pts), vals[, -1]) %>%
  na.omit()

# Split train/test
set.seed(42)
idx <- sample(seq_len(nrow(dat)), size = 0.7 * nrow(dat))
train <- dat[idx, ]
test  <- dat[-idx, ]

# Fit model
m_rf <- randomForest(as.factor(presence) ~ bio1 + bio12 + soilC + pH,
                      data = train,
                      ntree = 500,
                      importance = TRUE)

# Predict on test
prob <- predict(m_rf, newdata = test, type = "prob")[, 2]  # probability of presence
roc_obj <- roc(test$presence, prob)
auc(roc_obj)

This script is intentionally compact so that students can follow it line by line. A logistic regression version is even more interpretable:

m_glm <- glm(presence ~ bio1 + bio12 + soilC + pH,
             data = train,
             family = binomial())
summary(m_glm)

For a raster prediction map, students can use terra::predict() once the raster layer names match the predictor names in the model. The key teaching message is that reproducibility depends on naming files consistently, recording assumptions, and saving the session info. This is one reason code-based modelling pairs well with lessons like feedback loops in classroom technology: the process can be inspected, repeated, and improved.
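
A minimal prediction-map sketch follows. The one subtlety worth flagging is that the raster layer names must match the terms in the model formula, which is why the names are set explicitly here; index = 2 keeps the probability of the presence class from the random forest.

# Layer names must match the model terms (an easy mistake to make)
names(preds) <- c("bio1", "bio12", "soilC", "pH")

# Probability of presence across the whole study area
suitability <- predict(preds, m_rf, type = "prob", index = 2, na.rm = TRUE)

plot(suitability, main = "Predicted suitability for resistant butternut")
writeRaster(suitability, "butternut_suitability.tif", overwrite = TRUE)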

Interpreting coefficients or variable importance

Ask students to inspect whether higher soil carbon increases suitability or whether precipitation has a nonlinear effect. If using logistic regression, coefficients can show direction; if using random forest, variable importance can show which predictors matter most. However, importance scores should never be treated as ecological truth on their own. Students should always connect them back to forest ecology and the biology of butternut.
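
In R, both views are one function call away; the sketch below assumes the m_rf and m_glm objects from the Section 6 script.

importance(m_rf)   # MeanDecreaseAccuracy and MeanDecreaseGini per predictor
varImpPlot(m_rf)   # quick visual ranking of predictors

# For the logistic model, coefficient signs give direction of effect
summary(m_glm)$coefficients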

For advanced classes, ask them to test whether correlated predictors are inflating importance. Two climate variables that describe almost the same pattern may make the model harder to interpret. That conceptual warning parallels competitive intelligence workflows, where more signals can sometimes make analysis noisier, not better.
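
A quick way to surface this in class is a correlation matrix on the extracted predictor values; the |r| > 0.7 rule of thumb in the comment is a common convention, not a law, and the column names assume the Section 6 script.

# Pairwise correlations among predictors; flag |r| > 0.7 for discussion
round(cor(train[, c("bio1", "bio12", "soilC", "pH")]), 2)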

Reproducibility checklist

Students should save the data sources, the CRS, the date of download, the random seed, and the modelling settings. They should also note any data cleaning choices, such as removing NA values or thinning clustered points. If time allows, ask them to write a three-sentence methods summary that another student could reproduce without asking questions. That habit is a cornerstone of scientific reliability and aligns with the transparency values seen in responsible disclosure practices.
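
One lightweight way to enforce this is a short methods log written at the end of every session; the sketch below is a minimal example, and the seed, model settings, and CRS lines are placeholders for your class's actual choices.

# Record provenance alongside the results (values here are illustrative)
writeLines(c(
  paste("Date run:", Sys.Date()),
  "Random seed: 42",
  "Model: randomForest, ntree = 500",
  "CRS: EPSG:4326 (placeholder)",
  capture.output(sessionInfo())
), "methods_log.txt")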

7) ArcGIS Pro Classroom Workflow

Map setup and layer management

Start by adding the point file and raster layers into a project. Students should check symbology, confirm the projection, and inspect whether the rasters align properly. Then use sampling tools to attach predictor values to each point. The visual workflow makes it easy for students to understand that a model begins with geography, not statistics alone.

Once the table is assembled, students can run a classification or suitability analysis. Depending on the version of ArcGIS Pro available, the exact toolchain may differ, but the lesson objective is the same: identify environmental combinations associated with the target class. Ask students to create a layout that includes a legend, scale bar, north arrow, and clear map title. Good cartography matters because the final map is part of the argument.

Teaching interpretation in GIS

Students often assume that a strong-looking map must be a strong model, so guide them to inspect uncertainty and performance statistics. Show them the difference between the training points and the predicted surface. Then ask where the model might be overconfident. This helps students understand that GIS is not just map-making; it is analytical reasoning with spatial data.

To strengthen the decision-making element, have groups compare two thresholds and discuss which one would be more cautious for restoration. One threshold might maximise the amount of land available, while another might reduce failure risk. This mirrors the way planners think in real projects, much like risk mapping for floods and wildfire where the chosen cutoff changes operational priorities.

Exporting student work

Have students export the final map as PDF and create a one-page explanation for a forest manager. The explanation should state the model input layers, the strongest predictors, the main limitation, and one management recommendation. This turns the lesson from a software exercise into a communication task. If your class likes presentation formats, you can tie the output to event-style briefing structures, which encourage concise, audience-focused reporting.

8) Assessment Rubric and Student Success Criteria

Suggested rubric categories

A good SDM workshop should assess both process and interpretation. Students need credit for correct technical steps, but they also need credit for thinking like scientists. The rubric below can be adapted for GCSE, A-level, undergraduate, or teacher-training contexts. Keep the language clear so students know what strong work looks like.

| Criterion | Excellent | Secure | Developing | Points |
| --- | --- | --- | --- | --- |
| Question framing | Clear restoration question and correct target class | Question mostly clear | Question vague or incomplete | 5 |
| Data preparation | Layers aligned, documented, and clean | Mostly correct with minor issues | Multiple errors in setup | 10 |
| Model execution | Appropriate model fitted correctly | Model fitted with minor support | Major step missing or incorrect | 15 |
| Evaluation | Uses metrics and interprets performance thoughtfully | Uses at least one metric | Little or no evaluation | 15 |
| Ecological interpretation | Explains predictors and limitations well | Basic interpretation present | Weak or generic interpretation | 20 |
| Restoration recommendation | Actionable and evidence-based | Reasonable recommendation | Recommendation unclear | 15 |
| Communication and visuals | Map/report clear and professional | Adequate clarity | Hard to follow | 10 |
| Reproducibility | Code/files fully documented | Mostly reproducible | Poor documentation | 10 |

Formative checks during the lesson

Use quick questions during the workshop to catch misconceptions early. Ask students why they chose those predictors, what could bias the occurrence data, and what would happen if they changed the threshold. These short prompts help the teacher assess whether students understand the science or are only following steps mechanically. They also encourage metacognition, which improves retention.

You can also use peer review, where students swap maps and check whether the legend, title, and interpretation match the evidence. That peer-to-peer checking is especially effective in data-rich lessons. It resembles the practical wisdom behind cross-checking uncertain information: never rely on one view only.

Assessment extensions

For more advanced learners, add a short critique of model transferability. Could the same model be applied to a different region or future climate scenario? Why or why not? This pushes students toward systems thinking and long-term ecological planning. It also opens the door to discussing resilience, which is central to restoration science and to many of the articles in our science-and-data hub, including sensor-based data collection and forecasting from limited inputs.

9) Common Pitfalls, Troubleshooting, and Teaching Tips

Problem: Too few points

Rare species datasets are often small, and that can frustrate students who expect large sample sizes. Explain that small datasets are common in conservation science and require careful handling rather than abandonment. Students can use cross-validation, simplify the predictor set, or combine occurrences across verified sources. The lesson here is that imperfect evidence is still usable if analysed honestly.
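
For small datasets, k-fold cross-validation gives a more stable performance estimate than a single split. Here is a minimal base-R sketch using the logistic model and the dat table from Section 6, with k = 5 as an illustrative choice.

set.seed(42)
k <- 5
folds <- sample(rep(1:k, length.out = nrow(dat)))

cv_auc <- sapply(1:k, function(i) {
  fit <- glm(presence ~ bio1 + bio12 + soilC + pH,
             data = dat[folds != i, ], family = binomial())
  p <- predict(fit, newdata = dat[folds == i, ], type = "response")
  as.numeric(auc(roc(dat$presence[folds == i], p)))
})
mean(cv_auc)  # average held-out performance across folds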

If the class struggles with missing data, give them a cleaned starter dataset and reserve the raw-data version as an extension. This scaffolding strategy keeps the class moving while preserving rigour for learners who are ready for more challenge. The teaching principle is similar to the one behind keeping classroom conversation diverse when AI is common: structure supports better participation.

Problem: Overfitting or “too perfect” maps

If students produce a map that looks plausible but scores unrealistically well, ask them to inspect the training-testing split. The model may be memorising the data instead of learning general ecological patterns. Encourage them to reduce predictor count, lower model complexity, or increase spatial separation between train and test samples. This is a valuable lesson because many real-world models look better on paper than they perform in new locations.

Overfitting is a useful concept to compare with hardware-dependent performance: more compute does not automatically produce better insight. In ecology, interpretability and transferability often matter more than raw fit.

Problem: Students confuse correlation with causation

Remind learners that the model identifies associations, not proof that a soil variable directly causes survival. Some variables may stand in for others, such as temperature matching elevation or soil carbon matching forest age. Ask students to distinguish between explanatory language and predictive language. That distinction is one of the most important lessons in data science.

To reinforce this, ask each group to write one sentence beginning with “The model suggests…” and another beginning with “The model proves…”. Then have them revise the second sentence until it becomes scientifically defensible. This exercise strengthens both accuracy and scientific writing.

10) Classroom Variations, Enrichment, and Real-World Connections

Cross-curricular ideas

This workshop can be adapted for geography, biology, environmental science, and computer science. In geography, it supports spatial analysis and land-use planning. In biology, it teaches species ecology and adaptation. In computer science or data science, it introduces classification, evaluation, and reproducible workflows. The shared benefit is that learners see data as a way to answer meaningful questions rather than as isolated numbers.

If you want a broader environmental systems lens, connect the lesson to climate adaptation and biodiversity planning. Students can compare butternut restoration with another threatened species, or with a local habitat restoration project. This makes the activity more than a one-off modelling exercise and turns it into a reusable template for environmental decision-making.

Fieldwork extension

Students can design a field survey plan to validate the model. Which sites would they sample first? What evidence would confirm or challenge the suitability map? This moves the lesson from desk-based analysis to empirical thinking. It also helps learners appreciate that models guide fieldwork, and fieldwork in turn improves models.

For teachers interested in a technology-plus-fieldwork pairing, consider a small GPS or mobile-data activity alongside the workshop. The same basic logic appears in sensor-based experiments, where students collect, clean, and interpret data from the real world. The result is a more memorable and coherent science sequence.

Future climate scenarios

If your students are ready for extension work, have them repeat the model using future climate layers. Ask whether restoration zones shift northward, shrink, or expand. Then discuss the practical consequences for tree nurseries and conservation policy. This future-oriented step is a natural bridge from modelling to planning, and it gives students a concrete example of how science informs adaptation.
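
The mechanics are a small extension of the Section 6 workflow: swap the present-day climate rasters for future ones and predict again. The file names below are placeholders for whichever downscaled climate product you use, and holding soils constant is a simplifying assumption worth discussing with the class.

future_bio1  <- rast("bio1_2070.tif")   # placeholder file names
future_bio12 <- rast("bio12_2070.tif")
future_preds <- c(future_bio1, future_bio12, soilC, pH)  # soils held constant
names(future_preds) <- c("bio1", "bio12", "soilC", "pH")

future_suitability <- predict(future_preds, m_rf, type = "prob",
                              index = 2, na.rm = TRUE)

# Where does suitability shift? Map the change directly
plot(future_suitability - suitability, main = "Change in predicted suitability")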

11) Conclusion: Turning Spatial Data into Restoration Decisions

A species distribution model is not just a map; it is a decision-support tool. In this butternut workshop, students learn how climate layers, soil data, and occurrence records can be combined to guide restoration planning for a tree that is under severe ecological pressure. They also learn that modelling is a process of judgment: choosing variables, checking assumptions, testing accuracy, and communicating uncertainty. Those are the habits of mind that make science useful.

The butternut case study is especially effective because it is real, urgent, and teachable. It shows why resistant trees matter, how hybrids can complicate conservation choices, and how spatial data can focus limited restoration resources where they will have the biggest impact. If you want to pair this guide with other lessons on evidence, systems, and reproducibility, explore our related resources on satellite-based risk mapping, access and audit transparency, and benchmark-driven evaluation.

Most importantly, students leave with a model they can explain. That ability to move from raw data to defensible action is the heart of restoration planning, and it is exactly why species distribution models deserve a place in modern classrooms.

FAQ

What is a species distribution model?

A species distribution model predicts where a species is likely to occur based on environmental conditions such as climate, soil, and topography. In this workshop, students use those predictors to estimate where butternut restoration is most likely to succeed. The model can be used for presence/absence predictions or for mapping restoration suitability. It is a practical way to combine ecology with data analysis.

Do students need advanced coding skills for the R version?

No, but they do need enough support to run a prepared script and edit a few lines. A teacher can scaffold the lesson by providing file paths, a starter dataset, and a partially completed script. For beginners, ArcGIS Pro may be more accessible, while R is better for reproducibility and transparency. Both workflows teach the same ecological logic.

Which climate and soil layers work best?

Use layers that are ecologically relevant, well documented, and available at a compatible resolution. Mean annual temperature, precipitation, soil carbon, and soil pH are good starting points for butternut. The best choice depends on your lesson goals, the study extent, and the data you can access reliably. Avoid overloading the model with too many correlated variables.

How do we evaluate whether the model is good?

Use a test set and calculate metrics such as AUC, sensitivity, specificity, or a confusion matrix. Then ask whether the predicted map makes ecological sense. Good performance means more than a high score; it means the model can guide restoration decisions with reasonable confidence. Students should also discuss uncertainty and limitations.

Can this workshop be adapted for UK species?

Yes. The same workflow works for native UK plants, trees, or habitats if you replace the North American layers and occurrence data with UK-appropriate datasets. This is one of the strengths of species distribution modelling: the method transfers across regions, provided the data are suitable. It is an excellent way to teach both environmental science and reproducible spatial analysis.

What should students hand in at the end?

A strong submission includes the map, a short methods summary, a brief evaluation of model performance, and a restoration recommendation. If possible, ask for the code or workflow notes so another student could reproduce the result. That combination of product and process makes the assessment fair and scientifically meaningful.
