Geo2SigMap achieves efficient and precise RF signal mapping via a cascaded U-Net architecture, which is
composed of U-Net-Iso and U-Net-Dir for generating coarse path gain (PG) maps and fine-grained signal
strength (SS) maps, respectively. Specifically, the first U-Net generates a PG map that embeds the environmental information,
and the second U-Net further refines this process and generates the fine-grained SS map by incorporating directivity and
link budget information, as well as an additional input: a sparse SS map sampled across the same area.
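The cascaded data flow described above can be sketched as follows. The function and tensor names are illustrative stand-ins, not the authors' code: the two stubs replace trained convolutional U-Nets, and the 256x256 grid and 2 m/pixel resolution are assumed for illustration only.

```python
import numpy as np

def unet_iso(building_map):
    """Stub for U-Net-Iso: building map -> coarse path gain (PG) map.
    A real model would be a trained convolutional U-Net; here we only
    return a placeholder of the same spatial shape."""
    return np.zeros_like(building_map)

def unet_dir(building_map, pg_map, sparse_ss_map, tx_params):
    """Stub for U-Net-Dir: refines the coarse PG map into a fine-grained
    signal strength (SS) map, conditioned on directivity / link budget
    information (tx_params) and a sparsely sampled SS map of the area."""
    return pg_map + sparse_ss_map  # placeholder combination

# 256x256-pixel maps covering one 512 m x 512 m area (illustrative)
building_map = np.random.rand(256, 256)
sparse_ss = np.zeros((256, 256))      # mostly empty grid ...
sparse_ss[::32, ::32] = -80.0         # ... with a few field measurements (dBm)

pg_map = unet_iso(building_map)                        # stage 1: coarse PG map
ss_map = unet_dir(building_map, pg_map, sparse_ss,     # stage 2: fine SS map
                  tx_params={"azimuth_deg": 120})
print(ss_map.shape)
```

The key design point is that only the second stage consumes measurement data, so the first stage depends purely on environment geometry.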
The model is trained on purely synthetic ray tracing data
generated using our 3D scene generation workflow and NVIDIA's Sionna RT.
Therefore, no real-world measurements are required during the training phase.
The training set covers a 6.41 million km^2 area in North America, from which 27,176 areas of size 512 m x 512 m
with a building-to-land ratio of at least 20% are selected to generate the building map and PG map datasets used to train the
cascaded U-Net model. When the pre-trained model is employed to predict
the detailed SS map for a specific area, we incorporate a few field measurements that serve as the sparse SS map input to the
second U-Net. Such a design effectively streamlines the model's applicability across different areas and eliminates the need for retraining the
entire model for different geographical settings.
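The tile-selection criterion used for the training set above (keep only 512 m x 512 m areas whose building footprint covers at least 20% of the land) can be sketched as follows; the rasterized tiles and function names are hypothetical:

```python
import numpy as np

def building_to_land_ratio(building_mask):
    """Fraction of pixels covered by building footprints in one tile."""
    return float(np.mean(building_mask > 0))

def select_tiles(tiles, min_ratio=0.20):
    """Keep tiles whose building-to-land ratio is at least min_ratio,
    mirroring the 20% threshold used for the synthetic training set."""
    return [t for t in tiles if building_to_land_ratio(t) >= min_ratio]

# Tiny 2x2 "tiles" standing in for rasterized 512 m x 512 m building maps
dense = np.array([[1, 1], [1, 0]])   # 75% built-up -> kept
medium = np.array([[0, 0], [1, 0]])  # 25% built-up -> kept
empty = np.array([[0, 0], [0, 0]])   # 0% built-up  -> dropped
kept = select_tiles([dense, medium, empty])
print(len(kept))  # 2
```

Filtering out nearly empty tiles biases the training data toward areas where buildings actually shape propagation, which is where the learned mapping adds value over free-space models.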
We evaluate the model performance via a real-world measurement campaign, where three types of user equipment (UE) collect cellular information
from six LTE cells operating in the Citizens Broadband Radio Service (CBRS) band (3.55–3.7 GHz), deployed on the Duke University
West Campus. Using customized Android apps and Python scripts, we collect over 45,000 measurements, each including various key
cellular metrics such as the physical cell ID (PCI), reference signal received power (RSRP), and reference signal received quality (RSRQ).
Evaluation results show that our model achieves an average root mean square error (RMSE) of 6.04 dB for predicting the RSRP
at the UE across the six LTE cells, representing an average improvement of 3.59 dB over existing RF signal mapping methods
based on statistical channel models, ray tracing, and machine learning (ML).
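For reference, the RMSE reported above is the standard root mean square error between predicted and measured RSRP values in dB; a minimal sketch with made-up values (not the paper's data):

```python
import numpy as np

def rmse(pred_dbm, meas_dbm):
    """Root mean square error between predicted and measured RSRP (dB)."""
    pred = np.asarray(pred_dbm, dtype=float)
    meas = np.asarray(meas_dbm, dtype=float)
    return float(np.sqrt(np.mean((pred - meas) ** 2)))

# Illustrative predictions vs. field measurements for one cell (dBm)
pred = [-85.0, -92.0, -78.0]
meas = [-83.0, -95.0, -80.0]
print(round(rmse(pred, meas), 3))  # 2.38
```

In the paper's setting this error would be computed per LTE cell over the collected measurements and then averaged across the six cells.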