$3,200 – $6,500 for a 12-month subscription license
- Create, share and consume with one solution
- Generate engineering-ready geometrical models
- Produce 3D meshes with high-quality textures
- Scale to any size without limitations
- Employ parallel computing power for faster results
- Automate 3D mesh generation workflow
- Minimize the learning curve with user-friendly software
Learn more about what the Reality Modeling WorkSuite can do for you...
What is the Reality Modeling WorkSuite?
The Reality Modeling WorkSuite offers surveyors and engineers access to Bentley's most popular reality modeling applications at a discounted price.
Reality capture through 3D photogrammetry or laser scanning can be difficult. Save time with Bentley's solutions and continually produce high-fidelity 3D reality models at any size, scale, and complexity.
Thousands of users worldwide trust Bentley's reality modeling solutions to provide real-world digital context to their mapping, design, construction, inspection, and asset management projects.
The Reality Modeling WorkSuite supports your reality capture workflows and accelerates the decision-making process.
- CAPTURE - Automatically transform photos and/or point clouds into 3D reality meshes with unlimited scale and precision.
- MANAGE - Easily import, view, merge, clean, correct, and manage large amounts of reality data in one secure place.
- SHARE - Improve collaboration when you share beautiful visuals of your model with stakeholders and clients.
- ANALYZE - Use advanced editing tools and automated feature and object detection and extraction to streamline workflows.
Do it all with Bentley’s reality modeling solutions! Save money and access expert training with the Reality Modeling WorkSuite.
The Reality Modeling WorkSuite includes ContextCapture and ContextCapture Editor.
- CAPTURE with ContextCapture
Produce 3D models of existing conditions for infrastructure projects, derived from photographs and/or point clouds. These highly detailed, 3D reality meshes provide precise real-world context for design, construction, and operations decisions throughout the lifecycle of a project.
- MANAGE with ContextCapture Editor
Extend the value of reality modeling data with ContextCapture Editor, a 3D CAD module for editing and analyzing reality data. ContextCapture Editor makes it easy to quickly manipulate meshes as well as generate cross-sections, extract ground and breaklines, and produce orthophotos, 3D PDFs, and iModels. Integrating meshes with GIS and engineering data enables intuitive search, navigation, visualization, and animation of information within the visual context of the mesh to quickly and efficiently support the design process.
- SHARE with ContextCapture Viewer
Better collaborate when you share visuals of the 3D reality mesh with your teams. Easily explore and precisely measure reality meshes of any scale created with ContextCapture, and switch on or off the display of the model to texture and wireframe to better understand existing conditions. Tell a better story by producing movies with a time-based fly-through or create high-resolution screen captures of your 3D reality mesh. ContextCapture Viewer is available as a free download.
Learn more about ContextCapture.
Enhance your workflow to include analysis with the Reality Modeling WorkSuite Advanced. In addition to the software provided in the Reality Modeling WorkSuite, you will also get Orbit 3DM Feature Extraction and Pointools.
- ANALYZE with Orbit 3DM Feature Extraction (Standard)
This software will be available soon.
Navigate through all types and sizes of mapping data in a full 3D view quickly with support for mobile, UAS, aerial oblique, indoor, and terrestrial mapping hardware systems and translate different device setups and specifications into a single user-friendly environment. Orbit 3DM Feature Extraction includes templates for each vehicle setup to save time when importing your mapping data, exporting views and features or creating accurate reports. Learn more about Orbit.
- ANALYZE with Pointools
Visualize, manipulate, animate, and edit point clouds in a single workflow to decrease production time and increase accuracy. Pointools leverages a high-performance point-cloud engine, providing fast access to a high level of detail, layer-based editing and segmentation of data as well as professional images, animations, and clash detection features. Learn more about Pointools.
Reality Modeling WorkSuite vs Reality Modeling Advanced WorkSuite
| | Reality Modeling WorkSuite | Reality Modeling Advanced WorkSuite |
| --- | --- | --- |
| Manage | ContextCapture Editor | ContextCapture Editor |
| Share | ProjectWise ContextShare | ProjectWise ContextShare |
| Analyze | | Orbit 3DM Feature Extraction (Standard) |
- Can you mix photos from different sources at different resolutions? E.g. air photos with photos from the ground?
Yes. ContextCapture is the most versatile solution on the market and automatically extracts details from photos of any resolution. You can register the datasets properly using control points.
- What about panoramas? Can they be used?
Yes, provided there is more than 60% overlap between two successive source photos from the camera used to take the panorama shots.
- Is it possible to use 360 cameras, such as the NCtech iris360?
The most popular fisheye cameras (GoPro, DJI…) are supported and included in the camera catalog. 360 cameras are not recommended for this kind of process, as they introduce too much distortion.
- Is it possible to use RAW photos (14-bit, 16-bit, HDR)?
Yes. Most of the camera manufacturers' RAW formats are supported, but currently only the 8-bit channel is used.
- Is it possible to make 3D models from video files?
Yes. ContextCapture accepts videos as an input in MP4/WMV/AVI/MOV/MPEG formats, and automatically extracts a frame according to a user-defined period.
- Can it create models with images fully edited and modified in Photoshop/other programs or does it need the raw image files/info unedited?
The software needs the unedited image files: photo editing will confuse the estimation of the optical properties. However, masks can be used if a fixed area has to be removed from the photos.
- Can I import calibration parameters for my camera?
Yes. You can import an OPT file, or add a camera to your database and input its specific calibration parameters, such as distortion parameters, principal point, and focal length.
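Calibration parameters like these follow standard photogrammetric models. As an illustration only (the function and parameter names are ours, and this is the generic Brown radial model, not necessarily ContextCapture's internal one), radial coefficients map an ideal pixel to its distorted position:

```python
def apply_radial_distortion(x, y, cx, cy, f, k1, k2=0.0):
    """Illustrative Brown radial model: map an ideal pixel (x, y) to its
    distorted position, given the principal point (cx, cy), the focal
    length f (in pixels), and radial coefficients k1, k2."""
    xn, yn = (x - cx) / f, (y - cy) / f      # normalized image coordinates
    r2 = xn * xn + yn * yn                   # squared radius from the center
    scale = 1.0 + k1 * r2 + k2 * r2 * r2     # radial scaling factor
    return cx + f * xn * scale, cy + f * yn * scale

# With zero coefficients the mapping is the identity; a positive k1
# pushes points outward from the principal point (barrel distortion).
print(apply_radial_distortion(2100.0, 1500.0, 2000.0, 1500.0, 3500.0, 0.1))
```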
- What is the recommended overlap between photos to achieve better accuracy?
We recommend that every part of the scene is captured in at least 3 neighboring photographs. The overlap must then be more than 60% in all directions.
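For planning purposes, the 60% rule translates directly into a maximum spacing between successive shots. A minimal sketch (the helper name and inputs are our own, not part of any Bentley tool):

```python
def max_capture_spacing(footprint_m: float, overlap: float = 0.6) -> float:
    """Maximum distance between successive shots so that two consecutive
    photos still share the given overlap fraction.

    footprint_m: ground length covered by one photo along the travel direction.
    overlap: required overlap fraction (0.6 = the recommended 60%).
    """
    if not 0.0 <= overlap < 1.0:
        raise ValueError("overlap must be in [0, 1)")
    return footprint_m * (1.0 - overlap)

# A photo covering 50 m along track at 60% overlap allows at most
# 20 m of travel between shots.
print(max_capture_spacing(50.0))
```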
- What are the requirements of the camera / photos - calibrated, fixed lens etc.?
We recommend using a camera with a reasonable sensor size (in pixels) to reduce the number of photos, and with a fixed focal length. Adjusting the zoom will confuse the software, as it will have to estimate optical properties for every photo instead of for the entire photogroup (a group of photos with the same optical characteristics).
- Do you need surveyed ground control?
Ground Control Points (GCP) are not mandatory but highly recommended to accurately georeference the model, correct drift on corridor acquisitions, increase precision on the altitude, register different datasets in the same project, etc.
- Should all photos have geotags, or can the software manage with only one?
A few geotags should be enough, but most of the time all photos acquired with a camera equipped with a GPS sensor will have geotags in their EXIF attributes.
- Are photo IDs required to stitch together the photos?
No, a structured acquisition is not needed. Photos may be shot in any order and will be processed regardless of their IDs. That said, metadata such as the Exterior Orientation (EO) and even the Interior Orientation (IO) for every photo will help the software during the Aerial Triangulation (AT) process.
- What is the workflow for flying a site to capture images? I've flown a few from about 30 meters and wonder how to also capture the vertical faces of buildings. Can I fly a second flight path and incorporate those images into my model easily?
We offer a comprehensive acquisition guide that describes all the best practices for acquiring photos for a specific purpose. It is possible to acquire vertical imagery in a first flight, then acquire oblique imagery and add both to the same project.
- Does ContextCapture have an additional application for flight planning? Specifically, a drone?
Not for now. We believe that our job is to provide the best processing software, and we cooperate with major UAV manufacturers who already have their own mission planning solutions.
- How are you handling the obliquity in the image samples?
The software processes all images regardless of orientation, including oblique imagery, in the same way.
- Does the camera mounted to drone need to have a known coordinate or geolocation in order to represent true scale and location?
There are several ways to scale and georeference a model: through geotags with the photos, ground control points, or by adding manual tie points in the photos (only for the scale in this case).
- How does the software handle the background that is not relevant during the imagery acquisition?
Every static part of the scene that appears sharp and is not too reflective will be accurately reconstructed. If a user wants to focus on a specific area, the best approach is to create a Region of Interest (ROI) using the dedicated UI.
3D mesh editing
- Are the 3D meshes produced fully editable or do they work like fixed blocks?
ContextCapture produces meshes in various formats and structures. They may come as individual tiles at a particular resolution, in OBJ, FBX, Collada, or STL formats, or as a multiresolution mesh, in 3SM, 3MX (Bentley products), OSGB (OpenSceneGraph), LoD Tree, I3S (Esri), and 3DTiles (web viewing) formats. The mesh can be edited in OBJ (geometry + texture) with a third-party application and then re-imported into ContextCapture to re-generate the LoDs. An entire 3MX model, at any scale, can be loaded in MicroStation as a reference, and used for design and engineering processes.
- Can landmarks inside the 3d mesh be selected and manipulated?
Reality meshes in the 3DTiles format can be manipulated, classified, and annotated in MicroStation and the Bentley RealityData WebViewer.
- What is the accuracy of such models?
The global accuracy is about 1-2 pixels (resolution=projected size of a pixel on the scene, also called Ground Sampling Distance for aerial acquisitions) in a plane perpendicular to the acquisition, and 1-3 pixels along the main acquisition direction. Our recent benchmarks show the global accuracy is close to LiDAR, as long as you have a high enough pixel resolution.
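Since accuracy is expressed in pixels, the metric accuracy follows from the resolution. The standard Ground Sampling Distance relation (a general photogrammetry formula, not a product API) can be sketched as:

```python
def ground_sampling_distance(pixel_size_um: float, focal_length_mm: float,
                             altitude_m: float) -> float:
    """Projected size of one sensor pixel on the ground, in meters.

    pixel_size_um: physical pixel pitch of the sensor, micrometers.
    focal_length_mm: lens focal length, millimeters.
    altitude_m: flying height above the scene, meters.
    """
    return (pixel_size_um * 1e-6) / (focal_length_mm * 1e-3) * altitude_m

# Example: a 4 um pixel behind a 24 mm lens flown at 60 m gives a 1 cm GSD,
# so a 1-2 pixel accuracy corresponds to roughly 1-2 cm on the ground.
print(ground_sampling_distance(4.0, 24.0, 60.0))
```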
- Can accuracy be improved if the camera positions can be surveyed accurately?
Survey points help to accurately georeference the model in latitude/longitude/altitude and also avoid drift through large areas, and thus increase the global accuracy of a model.
- Do you have a recommended list of cameras and types of photos to use to get a certain level of accuracy?
Every acquisition process may benefit from a specific camera system. However, all cameras, from a mere smartphone to highly specialized aerial multidirectional camera systems, are supported by ContextCapture. What is important is the resolution (projected size of pixels on the scene), the sharpness of the photos, and a fixed focal length.
- Are users able to add reflective dot targets to scenes to enhance accuracy?
Reflective dots may help the software on uniform areas (a uniform white wall, for instance), where computer vision and photogrammetry algorithms will struggle. This is not required on areas containing enough level of texturing.
- Can 3MX and 3SM be exported to LumenRT?
Definitely! This will help you to enliven any captured context in minutes. 3SM will be a more optimized format for such a purpose.
- Is there an option to export to CityGML?
Urban area models generated by ContextCapture are comparable to CityGML LOD3 models but do not come with any semantics.
- Can these models be integrated with a 3D scanner? (Leica P40)
Georeferenced 3D models can be overlaid with laser scans. ContextCapture will also soon accept laser scans as input. This will let users get the best of both worlds: either complete a LiDAR acquisition with photos, or extract more details and increase precision for the same area.
- Integration with Google Earth?
ContextCapture produces multiresolution meshes in KML format, which can be directly loaded into Google Earth.
- What GIS software are ContextCapture outputs compatible with?
OpenCities Map, Esri’s ArcGIS, SuperMap, and more generally, any 3D GIS or visualization software compatible with a multiresolution-tiled format (OpenCities Planner, Unity 3D, OpenSceneGraph, Eternix’ BlazeTerra, etc.).
- Which Bentley software accepts the exported 3D mesh?
All products compatible with the V8i SS4, CONNECT, and DGNDB platforms support the 3MX format, including Descartes, Map, ABD, OpenRoads ConceptStation, etc.
- Is there a list of compatible 3D printers?
ContextCapture can export an STL or OBJ format, widely accepted by 3D printers.
- Do you have an idea of a File Size? For example, if I had 200 pictures at 25Megapixels each.
About 150 MB for such a project: roughly 100 times lighter than a colored LAS point cloud and 22 times lighter than a POD file.
- Is it possible to export only a colored point cloud?
Yes. ContextCapture can produce models in LAS and POD point cloud formats.
- Where can I get the list of 3rd party software that we can use after, for example, the water simulation etc.?
For communication purposes, LumenRT will do the job quite nicely. For more technical analysis, the OpenFlow brand is better suited.
- Can Bentley's design tools use the mesh as a surface or terrain, as in Power InRoads?
Definitely! OpenRoads ConceptStation loads a 3MX as a 3D base map. Descartes can also be used to extract a DTM from our digital surface model.
- Does referencing 3mx files in MicroStation use the standard reference functionality, or does it use raster manager functionality and is this fully integrated in ProjectWise?
MicroStation loads the Spatial Reference System (SRS) used to produce the 3MX file and references the model accordingly. We are working on closer integration with ProjectWise.
- How to combine points generated by scanner and photos?
In ContextCapture there is an “Adjust photos onto Pointcloud” feature that automatically runs an alignment before the 3D reconstruction.
- Is it possible to place created 3D models into different geographical zones?
The Spatial Reference System (SRS) can be selected when creating the project and in the viewer (after the production) from a library of 4,000+. This can be a user-defined referential system.
- How can I publish the models to my customers?
The Bentley RealityData web viewer is the best-suited path for sharing with stakeholders. It displays 3DTiles hosted on ProjectWise ContextShare, allowing photo-navigation, annotation, and permission management. ContextCapture also offers a free, web-compliant viewer, which allows you to publish any 3MX model, at any scale, on a web server and make it available to your users via a mere browser.
- For the WebGL viewer, is it possible to take collisions into account?
Clash detection can be done in MicroStation (on extracts of the mesh), or on point clouds in various solutions, but not in a viewer, whether web or desktop.
Processing and analysis
- How automatic is the process of processing multiple photos?
This is fully automatic as far as the input datasets are suitable (overlap, sharpness, optical properties, etc.).
- Regarding photo control, does the software prepare an AT solution report?
Yes. There is a report at the end of the AT, containing the various RMS values as well as processing parameters. Quality metrics are also viewable in the 3DView.
- When you say "Production time" is that clock time? Or CPU processing time?
The production time is considered as clock time. The average observed production speed is about 15 Gpix per Engine per day.
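Given that figure, a rough turnaround estimate is simple arithmetic. A back-of-the-envelope helper (our own illustration, using the quoted 15 Gpix per engine per day average):

```python
def estimated_production_days(num_photos: int, megapixels: float,
                              engines: int = 1,
                              gpix_per_engine_day: float = 15.0) -> float:
    """Rough clock-time estimate for a production, based on the average
    observed speed of 15 Gpix per engine per day."""
    total_gpix = num_photos * megapixels / 1000.0  # MP -> Gpix
    return total_gpix / (engines * gpix_per_engine_day)

# 200 photos at 25 MP = 5 Gpix: about a third of a day on a single engine.
print(estimated_production_days(200, 25.0))
```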
- How do you georeference the mesh in ContextCapture?
Either through control points or geotags with the photos.
- Is it possible to georeference model by GPS points captured by a camera?
Yes. Indeed, ContextCapture reads the EXIF parameters coming with the photos and extracts the geotags when present, as well as other camera properties, if any.
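EXIF stores GPS coordinates as degree/minute/second values plus a hemisphere reference. Converting them to the signed decimal degrees used for georeferencing is straightforward; a small stand-alone helper (ours, not a ContextCapture function):

```python
def exif_dms_to_decimal(degrees: float, minutes: float, seconds: float,
                        ref: str) -> float:
    """Convert an EXIF-style GPS tag (D, M, S plus a hemisphere reference
    'N'/'S'/'E'/'W') to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative in decimal notation.
    return -value if ref in ("S", "W") else value

# 48 deg 51' 29.6" N is about 48.8582 decimal degrees of latitude.
print(exif_dms_to_decimal(48, 51, 29.6, "N"))
```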
- How do you add control to photos?
Through a dedicated UI in the software. You can also load them through a text file and then identify them in your photos.
- How do you scale the contents?
Either by georeferencing the model, or by adding manual tie points with a distance value in the editor.
- Is there a function to manually classify the 3D point cloud for 3rd party survey?
Yes, Bentley Descartes and/or Pointools are dedicated to this type of application.
- Do you have object classification tools?
ContextCapture Insights, currently available through an Early Access Program (EAP), is able to perform such operations. This is made possible by properly training detectors.
- Do you have direct volume calculations in ContextCapture?
Yes, using the 3D viewer which will be included with ContextCapture. Users can measure coordinates, distances, height differences, areas, and volumes. In the web viewer, only coordinates, distances, and height differences can be measured.
- How do you calculate volumes?
Volumes are calculated by referencing either a mean plane created through the georeferenced selection polygon or a custom plane at a specific height.
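Conceptually, the computation reduces to summing per-cell height differences against the reference plane. A simplified sketch for a horizontal reference plane on gridded heights (our illustration, not the product's algorithm):

```python
def cut_and_fill(heights, cell_area_m2, plane_z):
    """Approximate cut/fill volumes of a gridded surface against a
    horizontal reference plane at elevation plane_z.

    heights: sampled surface elevations, one per grid cell (meters).
    cell_area_m2: area of each grid cell (square meters).
    Returns (cut, fill) volumes in cubic meters.
    """
    cut = sum(max(h - plane_z, 0.0) for h in heights) * cell_area_m2
    fill = sum(max(plane_z - h, 0.0) for h in heights) * cell_area_m2
    return cut, fill

# Four 1 m^2 cells at elevations 2, 3, 1, 0 m against a plane at 1 m:
# 3 m^3 of material above the plane, 1 m^3 below it.
print(cut_and_fill([2.0, 3.0, 1.0, 0.0], 1.0, 1.0))
```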
- Are there size limits in terms of MB/GB & poly count?
No. This is what makes ContextCapture so unique. City or bridge models can easily reach dozens of GB on a hard disk and be streamed through a web or local server, thanks to the multiresolution architecture and the optimization of the mesh.
- Are you going to simplify the process for producing orthophotos along different axes?
This is available in ContextCapture Editor.
- What are the greatest challenges users may encounter in the production of 3D models with ContextCapture?
Photo acquisition! The process is truly straightforward when the photos are appropriate: resolution, sharpness, overlap.
- Is there a method for using this software inside?
The software applies to any photo dataset (aerial, ground, outdoor, indoor), as long as the objects in the scene are static (objects that move too much will be automatically removed) and are neither highly reflective nor too uniform. The best practice for shooting photos indoors is to walk sideways with your back to a wall and shoot photos in multiple directions to the front (slightly upwards, downwards, rightwards, leftwards). The acquisition guide provides more information on this procedure.
Get more information about the Reality Modeling WorkSuite today...
"Bentley’s ContextCapture enabled multi-engine processing using not just images but point clouds, something that no other software provided. It proved to be consequential in completing the project with quality and client satisfaction."
Khalid Farid Sallam, Geomatics Manager, Ala Abdulhadi & Khalifa Hawas Consulting Engineering Company
Your peers were also interested in...
Roadway design software
Perform detailed design for surveying, drainage, subsurface utilities, and roadways with construction-driven engineering and analysis.
Rail network design software
Manage a wide variety of complex tasks such as yard/station design, tunnels, corridor modeling, turnout and switch placement, overhead line electrification, site development, sanitary and stormwater network design, subsurface utilities, and production of construction staking reports.
Site design software
OpenSite Designer offers complete detailed design capabilities for rapid site modeling and analysis, earthwork optimization and quantification, drainage and underground utility design, and automates project deliverables.