Microsoft’s research arm has long been fascinated with working with large amounts of data at scale. Projects like TerraServer explored how to search and display geospatial data, mixing mapping and demographics to show how we could provide large amounts of information to users’ desktops while working within the bandwidth constraints of the early internet.
That research continues, and projects move into the commercial side of the business as they mature. Sometimes, however, they straddle the boundary between Microsoft supporting external academic customers and providing tools that can be brought into your own commercial projects.
Introducing Planetary Computer
One such tool is Planetary Computer, a free-to-use collection of geospatial data from various providers, with standards-based APIs for querying and displaying that data, as well as SDKs to simplify application development. Some of the tools are now available for use with your own commercial data sources as Planetary Computer Pro, but the open, research-oriented platform is an excellent primer for using massive data sets to add deeper insights to your own and public data.
Microsoft is positioning Planetary Computer as a tool for building environmental applications that monitor population, pollution, plant cover, weather, and more, with data that can feed into its AI for Good machine learning program.
Planetary Computer isn’t a single application. Instead, it’s a framework for bringing multiple data sources together to build and deliver a collection of different geospatial environmental applications. At its heart is a catalog of curated data sources from a mix of commercial, academic, and government organizations, along with the necessary APIs to query and use that data, as well as ways to display results.
APIs are based on the STAC (SpatioTemporal Asset Catalog) specification. You can search by coordinates and by time, allowing you to track changes over time, for example, tracking the foliage in a specific area of rainforest over weeks, months, or years. STAC is designed to be provider-agnostic, so the queries you have built to work against one source in Planetary Computer’s catalog can be repurposed for another as you add more sources to your application.
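A search like the rainforest example above can be sketched with the open source pystac-client library. The STAC endpoint URL is Planetary Computer’s public one; the collection ID and coordinates are illustrative assumptions, not a tested query.

```python
def build_search_params(collection, bbox, date_range):
    """Assemble keyword arguments for a STAC item search: a collection ID,
    a (west, south, east, north) bounding box, and an ISO 8601 date range."""
    return {"collections": [collection], "bbox": bbox, "datetime": date_range}

# Illustrative values: a patch of rainforest over one year.
params = build_search_params(
    "sentinel-2-l2a",              # assumed collection ID from the catalog
    [-62.3, -3.6, -61.9, -3.2],    # west, south, east, north
    "2023-01-01/2023-12-31",
)

# Running the search needs network access and pystac-client
# (pip install pystac-client):
# from pystac_client import Client
# catalog = Client.open("https://planetarycomputer.microsoft.com/api/stac/v1")
# for item in catalog.search(**params).items():
#     print(item.id, item.datetime)
```

Because the query is just STAC, the same parameters work against any other STAC-compliant catalog by swapping the endpoint URL.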
Geospatial data in data science projects
Planetary Computer is a data science platform first and foremost, so much of the documentation focuses on working with Python and R to extract and analyze STAC data. Microsoft provides libraries to help with this, but if you’re looking for a quick way to work with it, there is also a simple data API that uses a URL query to extract data and deliver it as a pre-rendered PNG, ready for display in an application.
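The URL-query approach can be sketched with the standard library alone. The base URL is Planetary Computer’s data API, but the path and parameter names below, along with the item ID, are illustrative assumptions; check the data API reference for the real query surface.

```python
from urllib.parse import urlencode

BASE = "https://planetarycomputer.microsoft.com/api/data/v1"

def preview_url(collection, item, assets):
    """Build a URL that asks the data API for a rendered PNG.
    The path segment and parameter names are assumptions for
    illustration, not the documented API surface."""
    query = urlencode({
        "collection": collection,
        "item": item,
        "assets": ",".join(assets),
    })
    return f"{BASE}/item/preview.png?{query}"

# Hypothetical collection and item IDs:
url = preview_url("sentinel-2-l2a", "example-item-id", ["visual"])
# The response body is a PNG, ready to drop into an <img> tag or a UI widget.
```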
Another option generates responses in TileJSON format. Unlike the pre-rendered image call, this delivers a more complex response: a tile that can be displayed using tools that parse the TileJSON data returned by the API. It’s also interactive. As you move around the map, new data is generated and loaded without you needing to write code to handle the query; the host application’s TileJSON support does this for you.
One possibility is the Folium Python toolkit, which will layer your Planetary Computer results on a base map. The default is OpenStreetMap, which lets you build public-facing applications without navigating often-complex commercial mapping licenses.
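The TileJSON flow can be sketched as: fetch the TileJSON document, pull out its tile URL template, and hand that template to Folium, which defaults to an OpenStreetMap base layer. The TileJSON fragment below is a stand-in for a real API response, and the coordinates are arbitrary.

```python
def tile_template(tilejson):
    """Extract the first {z}/{x}/{y} URL template from a TileJSON document."""
    return tilejson["tiles"][0]

# A stand-in for the document the API would return:
doc = {
    "tilejson": "2.2.0",
    "tiles": ["https://example.com/tiles/{z}/{x}/{y}.png"],
}
template = tile_template(doc)

# With Folium (pip install folium), layer the tiles on the default
# OpenStreetMap base map and write out an interactive HTML page:
# import folium
# m = folium.Map(location=[51.5, -0.1], zoom_start=9)
# folium.TileLayer(tiles=template, attr="Microsoft Planetary Computer",
#                  overlay=True).add_to(m)
# m.save("map.html")
```

Folium’s map handles panning and zooming in the browser, so new tiles load from the template URL without any extra query code on your side.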
The API supports more complex queries too, with the ability to build mosaics that cover larger areas. Again, data is returned as TileJSON-format tiles, and you can use the same techniques to display the results.
Jumpstart analysis with Explorer
A useful part of the Planetary Computer tool set is its Explorer. This helps you quickly display a data set from the service’s catalog. It’s a relatively simple application with two panes. The first allows you to pick a data set from the catalog and then choose a date and the information you want to display. The other is a map view using a familiar mapping control where you can zoom into a place and overlay your selected data. Some of the data in the catalog is surprisingly up-to-date. For example, at the time of this writing in early March 2026, Landsat imagery for the United Kingdom was available up to the end of February. However, not every data set is global, and the Explorer allows you to see what is available for the locations you want to explore.
A useful feature of the Explorer is the ability to use different data sets as different layers. For example, you can show the relationship between leaf cover and ground temperature in urban and rural areas. Alternatively, you can display different temporal ranges from the same data set to see changes over time, such as showing flood patterns for a river or the effects of deforestation.
The Explorer is perhaps best thought of as a basic prototyping tool. It allows you to bring together data in a way that shows how information from different sensor platforms can be combined to give insights and help make policy decisions that can affect entire countries.
From Explorer to applications
Once you’ve built a visualization in Explorer, it’s a matter of clicking the “code snippet for search results” button to get the necessary Python to implement the same search in your code. This gives you a helpful tutorial on how to use the STAC API, building a polygon for searching and then getting data. The snippet covers only the search; it does not include the code to rebuild the visualization.
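The polygon step in that generated snippet can be sketched in plain Python. The GeoJSON shape below matches the STAC API’s `intersects` search parameter; the coordinates and the collection ID in the comment are illustrative assumptions.

```python
def geojson_polygon(ring):
    """Build a GeoJSON Polygon from a list of (lon, lat) pairs,
    closing the ring if the caller hasn't already."""
    coords = [list(p) for p in ring]
    if coords[0] != coords[-1]:
        coords.append(coords[0])   # GeoJSON rings must end where they start
    return {"type": "Polygon", "coordinates": [coords]}

# An illustrative search area (lon, lat corners of a rough rectangle):
area = geojson_polygon([(-0.5, 51.3), (0.3, 51.3), (0.3, 51.7), (-0.5, 51.7)])

# The polygon then slots into a STAC search as the `intersects` parameter,
# with an assumed collection ID:
# search = catalog.search(collections=["landsat-c2-l2"], intersects=area)
```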
Planetary Computer offers public access to its data, so there’s no need for a token for most queries. However, some of its data is stored in Azure Blob Storage, and here you will need a token to include in your queries. This can be generated by a call to a token endpoint for the data set’s storage account. Calling the endpoint returns the necessary token for all queries to that data set, along with an expiration time that determines how long the token can be cached in your application before a new request is needed.
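That expiry-driven caching can be sketched with a small standard-library helper. In practice the planetary-computer Python package handles signing and caching for you, so this is just the pattern; the fetch callable standing in for the token endpoint is hypothetical.

```python
import time

class TokenCache:
    """Cache a SAS token and refresh it shortly before it expires."""

    def __init__(self, fetch, margin_seconds=60):
        self._fetch = fetch            # callable returning (token, expires_at_epoch)
        self._margin = margin_seconds  # refresh this many seconds early
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Fetch a fresh token on first use or when the cached one
        # is within the safety margin of its expiry time.
        if self._token is None or time.time() >= self._expires_at - self._margin:
            self._token, self._expires_at = self._fetch()
        return self._token

# Hypothetical fetch: real code would call the data set's token
# endpoint and parse the token and expiry from the JSON response.
def fake_fetch():
    return "sas-token", time.time() + 3600

cache = TokenCache(fake_fetch)
token = cache.get()   # first call fetches; later calls reuse the cached token
```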
Microsoft does apply rate limits to its data, and these depend on whether queries come from outside the Azure region that hosts Planetary Computer and on whether your query includes a token. For best performance, always use an API key and host your applications in West Europe. Much of the required functionality is built into the Planetary Computer Python library, which includes a function that signs requests and manages token caching for you.
Working with Codespaces
If you want a quick way to start using Planetary Computer, Microsoft suggests forking the project’s sample GitHub repository and then using it as the basis of a Codespace environment. To get the best performance, make sure it’s running in West Europe and then launch a Dev Container based on any of the sample environments to start building your own applications.
If you prefer familiar desktop geographical information systems tools, there’s the option of adding a STAC plug-in to the open source QGIS to explore and analyze the data in Planetary Computer’s catalog. This gives you a quick way to mix its data with your own to test hypotheses and get information to support other applications, perhaps to help understand historical patterns for agriculture or planning.
It’s good to see Microsoft supporting pure research and education with tools like this; research teams need access to good data and the ability to bring it into their own applications. At the same time, offering a large-scale service like this gives Microsoft an effective way to monitor and improve Azure’s own services, with a known set of data and APIs that can provide the necessary telemetry and observability to help evaluate new storage and networking features.