Evaluating the Effectiveness of Large Language Models in Representing Textual Descriptions of Geometry and Spatial Relations (Short Paper)

2023 · Leibniz-Zentrum für Informatik (Schloss Dagstuhl) · 14,006 citations

Abstract

This research assesses the ability of large language models (LLMs) to represent geometries and their spatial relations. We use LLMs, including GPT-2 and BERT, to encode the well-known text (WKT) representations of geometries, and then feed the resulting embeddings into classifiers and regressors to evaluate how well they capture geometric attributes. The experiments demonstrate that while the LLM-generated embeddings can preserve geometry types and capture some spatial relations (up to 73% accuracy), challenges remain in estimating numeric values and retrieving spatially related objects. This research highlights the need for improvements in capturing the nuances and complexities of the underlying geospatial data and in integrating domain knowledge to support various GeoAI applications built on foundation models.
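To make the pipeline described in the abstract concrete, here is a minimal sketch (not the authors' code) of encoding WKT strings with a pretrained language model and training a simple classifier on the embeddings to predict geometry type. The checkpoint name (bert-base-uncased), the mean-pooling step, and the toy WKT strings are illustrative assumptions; the paper's actual experimental setup may differ.

```python
# Sketch of the abstract's pipeline: LLM embeddings of WKT -> simple classifier.
# Assumptions (not from the paper): bert-base-uncased, mean pooling, toy data.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

wkt_examples = [
    "POINT (30 10)",
    "LINESTRING (30 10, 10 30, 40 40)",
    "POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))",
    "POINT (5 5)",
    "LINESTRING (0 0, 1 1)",
    "POLYGON ((0 0, 0 1, 1 1, 0 0))",
]
labels = ["Point", "LineString", "Polygon", "Point", "LineString", "Polygon"]

def embed(wkt: str) -> torch.Tensor:
    """Mean-pool the model's last hidden states over the WKT tokens."""
    inputs = tokenizer(wkt, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

# Embed each WKT string and fit a geometry-type classifier on the vectors.
X = torch.stack([embed(w) for w in wkt_examples]).numpy()
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X))  # geometry-type predictions on the (tiny) training set
```

In this sketch the classifier plays the role of the probe described in the abstract: if geometry type can be recovered from the frozen embeddings, the LLM has preserved that attribute; the same pattern with a regressor would probe numeric attributes.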

Keywords

Computer science · Task (project management) · Language model · Natural language processing · Sentence · Artificial intelligence · Word (group theory) · Simple (philosophy) · Linguistics

Related Publications

Finding Structure in Time

Time underlies many interesting human behaviors. Thus, the question of how to represent time in connectionist models is very important. One approach is to represent time implici...

1990 · Cognitive Science · 10,427 citations

Publication Info

Year: 2023
Type: Preprint
Citations: 14,006
Access: Closed

Citation Metrics

14,006 citations (source: OpenAlex)

Cite This

Yuhan Ji and Song Gao (2023). Evaluating the Effectiveness of Large Language Models in Representing Textual Descriptions of Geometry and Spatial Relations (Short Paper). Leibniz-Zentrum für Informatik (Schloss Dagstuhl). https://doi.org/10.4230/lipics.giscience.2023.43

Identifiers

DOI
10.4230/lipics.giscience.2023.43
