While large language models (LLMs) may struggle to generate insights directly from raw time-series data, their strong pattern-recognition capabilities make them well suited to identifying and synthesizing existing insights. We therefore argue that an architecture separating a graph-based domain model from time-series storage is ideal for a generalized AI companion serving the heavy-asset industry: an agentic AI first identifies the nodes relevant to a user prompt (assets, sensors, and a large variety of pre-computed models) using a mixture of APIs, query-language generation, semantic search, and graph traversal. Only after this filtering step is time-series data retrieved to synthesize an answer.
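The two-step flow can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the graph, node names, and time-series store below are hypothetical in-memory stand-ins (not any specific product's API), and simple keyword matching stands in for semantic search.

```python
from dataclasses import dataclass, field

# Hypothetical in-memory stand-ins for the two stores: a graph of
# domain nodes (assets, sensors, models) and a separate time-series store.

@dataclass
class Node:
    node_id: str
    kind: str                      # "asset", "sensor", or "model"
    description: str
    neighbors: list = field(default_factory=list)

def build_graph():
    pump = Node("asset:pump-101", "asset", "cooling water pump")
    temp = Node("sensor:temp-7", "sensor", "bearing temperature")
    vib = Node("sensor:vib-3", "sensor", "vibration amplitude")
    pump.neighbors = [temp, vib]
    return {"asset:pump-101": pump}

# Step 1 (filter): traverse the graph and keep only the nodes relevant
# to the prompt. Keyword overlap stands in for real semantic search.
def find_relevant_sensors(graph, prompt):
    keywords = prompt.lower().split()
    hits = []
    for asset in graph.values():
        for n in asset.neighbors:
            if n.kind == "sensor" and any(k in n.description for k in keywords):
                hits.append(n.node_id)
    return hits

# Step 2 (retrieve): only now is the time-series store touched,
# and only for the ids that survived filtering.
TS_STORE = {
    "sensor:temp-7": [71.2, 73.5, 78.9],
    "sensor:vib-3": [0.02, 0.03, 0.02],
}

def fetch_series(sensor_ids):
    return {sid: TS_STORE[sid] for sid in sensor_ids}

graph = build_graph()
relevant = find_relevant_sensors(graph, "why is the bearing temperature rising")
series = fetch_series(relevant)
```

The point of the split is that the (cheap) graph query bounds the scope of the (expensive) time-series retrieval: here only the bearing-temperature sensor's data is fetched, and the vibration series is never read.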