|Analysis Situs ./features/architecture overview|
All packages of Analysis Situs can be logically divided into backend and frontend parts. The contents of each part are shown schematically in the figure below.
The Data Model is represented with the Active Data framework, which is based on the OCAF module of the OpenCascade kernel. The architecture of an OCAF-based application resembles the typical architecture of an enterprise application built on top of a database engine. In our case, OCAF (Active Data) serves as a hierarchical NoSQL in-memory database. At the layer above, all necessary Data Access Objects (DAOs) reside. These DAOs bring an object-oriented abstraction to the Data Model. The Data Access Objects are also called Data Cursors (or Data Interfaces) because they do not store any data in their member fields: they only point to the corresponding persistent entity (OCAF label), which is used to read and write the actual data. Finally, at the top level of the Data Model, we have Services which drive the business logic of the application. It is the role of the Service layer to update the contents of the Data Model according to the user inputs.
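The Data Cursor idea can be sketched in plain C++ (the types below are hypothetical, not the real Active Data API): a cursor class stores no data of its own, only the address of a persistent record, so any number of cursors opened on the same label observe the same data.

```cpp
#include <cassert>
#include <map>
#include <string>

// The "database" stands in for an OCAF document: a keyed store of records.
struct Record {
  std::map<std::string, std::string> attrs; // persistent attributes
};

using Store = std::map<int, Record>;

// A Data Cursor (DAO): its only members are the storage reference and the
// label of the persistent entity it points to -- no cached data.
class PartCursor {
public:
  PartCursor(Store& store, int label) : m_store(store), m_label(label) {}

  void SetName(const std::string& name) { m_store[m_label].attrs["name"] = name; }
  std::string GetName() const { return m_store.at(m_label).attrs.at("name"); }

private:
  Store& m_store; // the persistent storage (cf. an OCAF document)
  int    m_label; // the address of the entity (cf. an OCAF label)
};
```

Because the cursor is stateless, it can be constructed and thrown away cheaply wherever access to the persistent entity is needed.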
Programmatically, the DAO layer of the architecture is realized in the asiData library, where all object interfaces are declared and implemented. For the Service layer, there is a dedicated asiEngine library. In that library, one can find classes like asiEngine_Part or asiEngine_IV which contain the business logic relevant to a specific object type (the Part and the Imperative Viewer for the mentioned classes).
For a Part object, a piece of business logic may involve recomputing accelerating structures, such as the AAG, whenever the Part receives another B-Rep shape to store. In most cases, it is a good idea to work with a Part (and other object types) via its Service API (i.e., asiEngine_Part) instead of using the DAO class (i.e., asiData_Part) directly. Calling the API functions ensures that the Data Model remains consistent.
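A minimal sketch of why the Service layer matters (hypothetical types; the real asiEngine_Part operates on OCAF data): the Service recomputes the derived structure together with the stored shape, so the two can never diverge, whereas writing through the DAO directly could leave the derived data stale.

```cpp
#include <cassert>
#include <string>

// Stands in for asiData_Part: raw access to the stored data.
struct PartData {
  std::string shape; // the stored B-Rep shape (reduced to a string here)
  std::string aag;   // derived accelerating structure (e.g., the AAG)
};

// Stands in for asiEngine_Part: the Service that keeps the model consistent.
class PartService {
public:
  explicit PartService(PartData& data) : m_data(data) {}

  // Updating through the Service recomputes the derived AAG as well.
  void Update(const std::string& shape) {
    m_data.shape = shape;
    m_data.aag   = "aag(" + shape + ")"; // rebuild the accelerating structure
  }

private:
  PartData& m_data;
};
```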
An object in Analysis Situs is called a Node, following the convention of the Active Data framework. A Node is defined by the list of its Parameters, which can be observed in the Parameter Editor panel of the desktop application. Additionally, a Node may have different relations to other Nodes.
The visualization module (implemented in the asiVisu library) is tightly bound to the Data Model backbone. For a Node to be represented in a 3D scene, a corresponding Presentation class is created. There is a one-to-one correspondence between the Node types and the Presentation types. In practice, this means that whenever a new Node type is introduced, a dual Presentation class should be created for it. A Presentation is essentially a collection of Pipelines, where a Pipeline is an abstraction over the similar notion in the VTK visualization library [Schroeder, 2006].
A Pipeline starts with a Data Source which is aware of geometric primitives such as curves, surfaces, meshes, etc. At the same time, a Data Source is not aware of OCAF. This independence from OCAF at the Data Source level allows reusing the predefined set of Pipelines in situations when no persistent storage is used or when the storage is variable (e.g., you may store a parametric curve in different Node types while the visualization Pipeline remains the same).
It is also possible to reuse Data Sources in different Pipelines, which is especially useful when dealing with large objects. It is the Pipeline object that creates all the data sources, algorithms (VTK "filters"), mappers, and actors it uses. By convention, a Pipeline may have only one actor (see the VTK documentation for an overview of filters, mappers, and actors).
The persistent data is transferred from OCAF to a visualization Pipeline by means of the so-called Data Providers. A Data Provider handles the variability in how the Data Model represents a specific object that needs to be rendered in a 3D scene. Thanks to the abstract Data Providers, all Pipelines can be kept OCAF-free: a Pipeline relies only on the abstract interface of its Data Provider to feed its Data Sources properly. All details related to the Data Model, the specific Node and Parameter types, and the relations between the data objects are encapsulated within the implementations of the concrete Data Providers. The correspondence between Pipelines and their Data Providers is managed by the Presentation classes, which construct the Pipelines and the Data Providers and associate them with each other.
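The OCAF-free Pipeline plus abstract Data Provider arrangement can be sketched as follows (hypothetical names; geometry is reduced to a string). The Pipeline depends only on the abstract interface, while each concrete provider would encapsulate the Data Model details for one Node type.

```cpp
#include <cassert>
#include <string>
#include <utility>

// The abstract interface a Pipeline sees: no OCAF, no Nodes, no Parameters.
class CurveDataProvider {
public:
  virtual ~CurveDataProvider() = default;
  virtual std::string GetCurve() const = 0; // geometry, simplified to a string
};

// The Pipeline feeds its Data Source through the provider interface only.
class CurvePipeline {
public:
  std::string Execute(const CurveDataProvider& prov) {
    return "rendered:" + prov.GetCurve(); // pretend visualization step
  }
};

// One concrete provider per Data Model representation: all details of where
// the curve is stored would be encapsulated here.
class NodeCurveProvider : public CurveDataProvider {
public:
  explicit NodeCurveProvider(std::string stored) : m_stored(std::move(stored)) {}
  std::string GetCurve() const override { return m_stored; }
private:
  std::string m_stored; // stands in for data read from a Node's Parameters
};
```

Storing the same curve in a different Node type would only require another provider subclass; the Pipeline stays untouched.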
All Parameters of all Nodes store their last modification time. This information is used to perform lazy visualization updates. Each Pipeline stores its own modification time as a member field. If the modification time of the sourced Parameters is more recent than the modification time of the Pipeline, the Pipeline should be (re)executed to bring its data up to date. Technically, since a Pipeline is not aware of OCAF and of the Data Model Parameters, the timestamp check is done by a Data Provider.
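The lazy-update check can be sketched like this (hypothetical types): the provider compares the Parameter's modification time against the Pipeline's own timestamp and triggers re-execution only when the source is newer, so an unchanged Parameter costs nothing.

```cpp
#include <cassert>

// Stands in for a Data Model Parameter with its last modification time.
struct Parameter {
  long mtime = 0;
  void Modify(long now) { mtime = now; }
};

// Stands in for a visualization Pipeline with its own execution timestamp.
struct Pipeline {
  long mtime         = -1; // time of the last execution (-1: never executed)
  int  numExecutions = 0;
  void Execute(long now) { mtime = now; ++numExecutions; }
};

// The Data Provider owns the timestamp comparison, keeping the Pipeline
// unaware of OCAF and Parameters.
void UpdatePipeline(const Parameter& p, Pipeline& pl, long now) {
  if (p.mtime > pl.mtime)
    pl.Execute(now); // the source is newer: (re)execute the Pipeline
}
```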
To visualize a Node, you need to construct a Presentation object for it. Additionally, it is necessary to specify the 3D viewer where this Presentation will render its Pipelines. The association between the Presentations and the Nodes, together with the link to the rendering window, is managed by the so-called Presentation Manager, which provides a set of common visualization services.
There is a one-to-one correspondence between Presentation Managers and 3D viewers. To visualize a Node in a certain viewer, you need this viewer to be managed by a dedicated Presentation Manager class. In Analysis Situs, the base class for all 3D viewers is asiUI_Viewer. This class holds a reference to its corresponding Presentation Manager.
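The Node-to-Presentation association kept by a Presentation Manager can be sketched as a simple keyed registry (hypothetical names; real Presentations would own Pipelines and render into the manager's viewer). Each viewer would own one such manager instance.

```cpp
#include <cassert>
#include <map>
#include <memory>
#include <string>
#include <utility>

// Stands in for a Presentation bound to one Node.
struct Presentation {
  std::string nodeId;
  explicit Presentation(std::string id) : nodeId(std::move(id)) {}
};

// One manager per viewer: it owns the Node-to-Presentation association.
class PresentationManager {
public:
  // Returns the Node's Presentation, creating it on first request.
  Presentation& Actualize(const std::string& nodeId) {
    auto it = m_prs.find(nodeId);
    if (it == m_prs.end())
      it = m_prs.emplace(nodeId, std::make_unique<Presentation>(nodeId)).first;
    return *it->second;
  }

  bool IsPresented(const std::string& nodeId) const {
    return m_prs.count(nodeId) > 0;
  }

private:
  std::map<std::string, std::unique_ptr<Presentation>> m_prs;
};
```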
There is a single entry point to the Data Model: the asiEngine_Model class. Using this class, you can iterate over all Nodes in the project, find Nodes, and delete or copy/paste them. This class is also responsible for such basic operations as Open/Save, Undo/Redo, and compatibility conversion between different versions of project files. On start-up, an empty Data Model is created and populated with a predefined structure of Nodes.