Add reliable, up-to-date reality data to all your projects

The Reality Modeling WorkSuite is a sales bundle that offers you access to Bentley’s most popular reality modeling applications: ContextCapture, Orbit 3DM Feature Extraction, Pointools, and ProjectWise ContextShare, at a discounted price. Reality capture through 3D photogrammetry or laser scanning can be difficult. Save time with Bentley’s solutions and consistently produce high-fidelity 3D reality models.

End-to-end solution for adding digital context to your projects

Bentley’s reality modeling software can handle reality data of any size, from any source, including point clouds, imagery (including 360-degree imagery), textured 3D meshes, and traditional GIS resources. You can integrate and combine all your reality data into one single digital context. Easily visualize and navigate 3D mapping data in real time in full 2D and 3D. Take advantage of automated measurements and extract features for asset inventory, floor-plan building, and asset verification and attribution. Add real-world context throughout the project lifecycle, across design, construction, and operations.

“Bentley’s ContextCapture enabled multi-engine processing using not just images but point clouds, something that no other software provided. It proved to be consequential in completing the project with quality and client satisfaction.”

Technical Capabilities

Reality Modeling WorkSuite

 

                                Reality Modeling WorkSuite    Reality Modeling WorkSuite Advanced
ContextCapture                  Included                      Included
ContextCapture Editor           Included                      Included
ProjectWise ContextShare        Included                      Included
Orbit 3DM Feature Extraction    Standard                      Included
Bentley Pointools               Not included                  Included

ContextCapture and ContextCapture Editor

Produce 3D models of existing conditions for infrastructure projects, derived from photographs and/or point clouds. These highly detailed, 3D reality meshes provide precise real-world context for design, construction, and operations decisions throughout the lifecycle of a project. Develop precise reality meshes affordably with less investment of time and resources in specialized acquisition devices and associated training. You can easily produce 3D models using up to 300 gigapixels of photos taken with an ordinary camera and/or 3 billion points from a laser scanner, resulting in fine details, sharp edges, and geometric accuracy. Dramatically reduce processing time with the ability to run two ContextCapture instances in parallel on a single project.
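To put the stated capacity in perspective, a quick back-of-the-envelope calculation shows how many photos fit within the 300-gigapixel project limit. The 20-megapixel camera resolution below is an assumed example, not a figure from this document:

```python
# Rough capacity check for a ContextCapture project (illustrative arithmetic only).
# The 20-megapixel camera is an assumed typical value, not a product requirement.

PROJECT_LIMIT_GIGAPIXELS = 300   # stated ContextCapture project limit
CAMERA_MEGAPIXELS = 20           # assumed example camera resolution

limit_pixels = PROJECT_LIMIT_GIGAPIXELS * 1_000_000_000
photo_pixels = CAMERA_MEGAPIXELS * 1_000_000

max_photos = limit_pixels // photo_pixels
print(max_photos)  # 15000 photos of 20 MP fit within 300 gigapixels
```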

Extend your capabilities to extract value from reality modeling data with ContextCapture Editor, a 3D CAD module for editing and analyzing reality data, included with ContextCapture. ContextCapture Editor enables fast and easy manipulation of meshes of any scale as well as the generation of cross sections, extraction of ground and breaklines, and production of orthophotos, 3D PDFs, and iModels. You can integrate your meshes with GIS and engineering data to enable the intuitive search, navigation, visualization, and animation of that information within the visual context of the mesh to quickly and efficiently support the design process.

ContextCapture Technical Capabilities

Input
  • Multiple camera project management
  • Multi-camera rig
  • Visible field
  • Infrared/thermal imagery
  • Videos
  • Laser point cloud (3 billion points for ContextCapture, unlimited for ContextCapture Center)
  • Surface constraints: imported from 3rd party or automatically detected using AI
  • Metadata file import
  • EXIF
Calibration / Aerotriangulation (AT)
  • Automatic calibration / AT / bundle adjustment
  • Parallelization ability on ContextCapture, ContextCapture Center and ContextCapture Cloud Processing Service
  • Project size limitation (300GPIX for ContextCapture, unlimited for ContextCapture Center)
  • Control points management
  • Block management for large AT (only available on ContextCapture Center)
  • Quality report
  • Laserscan/photo automatic registration
  • Splats display mode
Georeferencing
  • GEOCS management
  • Georeferencing of generated results
  • QR codes, AprilTags, and ChiliTags: ground control point automation
Scalability
  • Tiling
Computation
  • GPU based
  • Multi-GPU processing based on Vulkan (optional)
  • Background processing
  • Scripting language support / SDK
Editing
  • Touch-up capabilities (export/reimport of OBJ/DGN)
  • Orthophoto visualization
  • DEM / DSM visualization
  • DTM extraction
  • Cross-sections
  • Contour lines (with Scalable Terrain Model)
  • Point cloud filtering and classification
  • Breaklines extraction
  • Modeling feature
  • Support of streamed reality meshes
  • Create scalable mesh from terrain data
  • Volume calculation
Output and Interoperability
  • Multiresolution mesh (3MX, 3SM and Cesium 3D Tiles)
  • Bentley DGN (mesh element)
  • 3D CAD Neutral formats (OBJ, FBX)
  • KML export (mesh)
  • Esri I3S / I3P
  • Other 3D GIS formats (SpacEyes, LOD Tree, OSGB)
  • 3D PDF
  • AT result export (camera calibration and photo poses)
  • DEM / DSM generation
  • True orthophoto generation
  • Blockwise color equalization
  • Point cloud (LAS, LAZ, and POD)
  • Input data resolution texture mode
  • AT quality report
  • Animations (fly-through video generation)
  • QR code: 3D spatial registration of assets
Viewing
  • Free ContextCapture Viewer
  • Web viewing
Measurement and Analysis
  • Distances and positions
  • Volumes and surfaces
  • Input data resolution
  • Photo-navigation tool
Bentley CONNECT
  • Upload to ProjectWise ContextShare
  • Reality mesh streaming from ProjectWise ContextShare
  • Associate to CONNECT project
  • CONNECT Advisor

ProjectWise ContextShare

ProjectWise ContextShare is a cloud service for storing, managing, and sharing reality data. Collaborate better by sharing visuals of the 3D reality mesh with your teams. ContextShare extends Bentley’s ProjectWise connected data environment to securely manage, share, and stream reality meshes, and their input sources, across project teams and applications, increasing team productivity and collaboration. It enables you to stream large amounts of reality modeling data without the need for high-end hardware or complex IT infrastructure. ContextShare is accessible through Bentley’s software applications, such as MicroStation and Bentley Descartes. UAV companies and surveying and engineering firms leveraging reality modeling in-house can quickly access their 3D reality meshes generated with ContextCapture.

ProjectWise ContextShare Technical Capabilities

  • Annotate reality meshes with many types of information
    Improve collaboration and communication with the annotation feature. Add value to your reality meshes by creating annotations with information, such as a name, a description, and clickable links to online resources.
  • Collaborate in real time
    Capture and share real-time changes; collaborate on latest documents anywhere and anytime over mobile and desktop devices.
  • Connect project participants through an instant-on cloud service
    Use a secure cloud-based portal to work from any location and gain built-in data backup and recovery, without needing to install or maintain any software. Connect your entire supply chain with ease, using the secure platform to efficiently collaborate without opening your firewall.
  • Find documents quickly
    Access your latest files and favorite content based on your requirements for accurate document retrieval.
  • Employ trusted file sharing
    Eliminate the redundancy and confusion often caused by documents stored on multiple sites and with different applications.
  • Put files in a project context
    Create and manage file sharing with user, workgroup, and team folder structures.

Orbit 3DM Feature Extraction Standard

Quickly navigate through mapping data of all types and sizes in a full 3D view, with support for mobile, UAS, aerial oblique, indoor, and terrestrial mapping hardware systems, and translate different device setups and specifications into a single user-friendly environment. Orbit 3DM Feature Extraction includes templates for each vehicle setup to save time when importing your mapping data, exporting views and features, or creating accurate reports.

Learn more about Orbit 3DM Feature Extraction.

Orbit 3DM Feature Extraction Standard Technical Capabilities

  • Support all hardware systems
  • Import and optimize data
  • Full 3D viewing
  • Use and overlay vector data
  • Use raster data
  • Measure points, lines, areas
  • Copy to feature
  • Volumetric analysis
  • Contour lines
  • Profiles and cross sections
  • Client/server workforce enabled
  • Manage vector-data themes
  • Semi-automated point cloud measurement tools
  • Workflow and speed procedures

Bentley Pointools

Visualize, manipulate, animate, and edit point clouds in a single workflow to decrease production time and increase accuracy. Pointools leverages a high-performance point-cloud engine, providing fast access to a high level of detail, layer-based editing and segmentation of data, as well as professional images, animations, and clash detection features. Pointools allows you to view, animate, and edit point clouds in stand-alone workflows. It enables you to intuitively clean up and prepare point clouds to make them easier to reuse in downstream applications. You can streamline scan-to-model workflows by importing point clouds from all major scanner manufacturers.

Learn more about Bentley Pointools.

Bentley Pointools Capabilities

Object Import
  • Point-cloud import from: ASCII, Pointools v1.0-1.1 POD, Terrascan BIN, LAS, LAZ, e57, Leica PTX and PTS, Faro FLS and FLW, Riegl 3DD, RPX, RDB and RSP, Optech IXF, Topcon CL3, and DeltaSphere 3000 RTPI
  • 3D model import from: 3DStudio 3DS, Lightwave LWO, Wavefront OBJ
  • Drawing import from: AutoCAD DXF and DWG, Esri shapefile
Project File Management
  • Stores current project settings, layers, saved views, tool states, and animation setup
  • Auto-save
Interface and Navigation
  • Dockable menus
  • Object Tree manager
  • Navigation modes: zoom, pan, free, examine, explore, and light direction
  • 3D Connexion mouse support
  • Walk mode navigation using arrow keys
General Visualization
  • Display settings: frame rate, overall and object bounding box, axis, grid, refresh methods, and camera transition
  • Environment settings: backdrop types and lighting
  • Stereo anaglyph, interlaced or generic quad-buffered display
  • Saved Views manager
Point-cloud Visualization
  • Cloud and layer shading
  • RGB (color) and intensity shading: intensity contrast and brightness control, and point lighting with material quality control
  • Plane shading: distance, offset, axis, fit to data, and edge
  • Dynamic LOD control
  • Clipbox (to isolate area of interest)
3D Model Visualization
  • Shaders: cel-shaded, outline (pen-line like effect), and lighting
  • 3D model display: render as points if needed, lighting, texturing, cel-shade, outline, double-sided, cull back face, and Z-order transparency
Drawing Visualization
  • Anti-aliased display
  • Per layer visibility control
Notes
  • Attachment of documents, images, movies, and other multimedia content to geographic locations
  • Notes management in folders
  • Display styles: color, box style, font type, style, and size
  • Display options: depth cue, show full note, show coordinates, collapse notes with distance, and position on viewport edge
  • Hyperlink support: Internet URL, file, saved views, animation frame, play animation, and run script
Point-cloud Differencing and Clash Detection
  • Static and dynamic clash detection between objects
  • Detailed in-viewport clash region display
  • Point-cloud differencing
Point Editing
  • 2D point selection: rectangle, polygon fence, and plane
  • 3D point selection: cube brush, sphere brush, and point cloud from point
  • Selection operations: select all, deselect all, and invert
  • Selection constraints: by color, intensity, and grayscale
  • Point visibility operations: hide selected, unhide all, and invert visibility
  • Support for up to 128 point layers
  • Layer operations: add, remove, hide, show, switch layer mode (single or multi-layer), apply random layer colors, reset, setup LAS layer, point cloud per layer, layers from classification, layers from selected classification, and move points to layer
  • Layer manager
  • RGB painting: direct application via brush tools and layer-wide operations via layer filter
  • RGB color selector: current color, extract color from viewport, swatch (colors storing), color selector (based on hue, saturation, luminance), color channel slider, transparency, and nine blending modes
  • Color application using sphere or cuboid brush
  • Active layer filters: erase paint operations, fill points, reset RGB editing, brightness/contrast, hue/saturation, and white balance
  • Stack-based editing (stores point-related operations and previous editing operations can be applied to other data)
Object Manipulation
  • Two in-viewport rotating and movement modes: Axis Arrow and Rotate rings
  • Local or world transform mode (keep object’s own coordinate system or apply model coordinate system)
  • Pick tool (click any object from viewport to select it)
  • Large object reference
Measurement Tools
  • Point measure (single point position)
  • Distance measure (point-to-point distance)
  • Measure log options: save to delimited ASCII file, clear, display settings (color, coordinate and distance units, and precision), and tag (code applied to next measurement)
Rendering Snapshots
  • Size options: image dimension and scale, output file size, and calculate print size
  • Selection of area of interest
  • Info options (for orthographic projection): minor and major ruler markings, or as grid, scale text, datum line, title, camera position and direction, and add logo or image
  • Output options: anti-aliasing level and file format (TIFF, JPG, EPIX)
Stereoscopic Viewing
  • Viewing via anaglyph glasses, auto-stereoscopic screens, or any quad-buffer-based stereo device
  • Display options: depth control and amount (of depth effect), and swap left/right cameras
Animations and Movies
  • Animation wizard (fly-through, orbit)
  • Timeline user interface, with animation controller/parameters and keying mode
  • Panning and zooming of the timeline
  • Undo/redo timeline operations
  • Animation options: scale animation and camera path import/export
  • Animation of 3D objects, camera, clip box, light, and settings
  • Object and camera transformation parenting
  • Object and camera motion scripting
  • Graph editor: fine-tuning of keyframe position, key value editing, and interpolation methods (stepped, linear, Catmull-Rom spline, and TCB spline)
  • Control of timeline settings
  • Image sequence, AVI, or MOV movie output
  • Stereo left and right output
  • Tiled output

Frequently Asked Questions

What is the Reality Modeling WorkSuite?

The Reality Modeling WorkSuite offers surveyors and engineers access to Bentley’s most popular reality modeling applications: ContextCapture, Orbit 3DM Feature Extraction Standard, Bentley Pointools, and ProjectWise ContextShare, at a discounted price. Thousands of users worldwide trust Bentley’s reality modeling solutions to provide real-world digital context to their mapping, design, construction, inspection, and asset management projects.

How much does ContextCapture cost?

ContextCapture is included as a bundle with Virtuosity’s Reality Modeling WorkSuite and Reality Modeling WorkSuite Advanced. A practitioner license of Reality Modeling WorkSuite costs $3,392 USD and Reality Modeling WorkSuite Advanced costs $6,890 USD at Virtuosity.com. Prices vary per region. While there are various types of licensing available, a common choice is the 12-month practitioner license offered through Virtuosity, Bentley’s eCommerce store. When you purchase through Virtuosity you get a Virtuoso Subscription. This means you get the software and “Keys” (tokens) to redeem for customizable training, mentoring, and consulting services.

Input data
Can you mix photos from different sources at different resolutions? For example, aerial photos with photos taken from the ground?

Yes. ContextCapture is the most versatile solution on the market and automatically extracts details from photos of any resolution. You can register the datasets properly using their control points.

What about panoramas? Can they be used?

Yes. This is possible by making sure there is more than 60% overlap between two successive source photos from the camera used to take the panorama shots.

Is it possible to use 360 cameras, such as the NCtech iris360?

The most popular fisheye cameras (GoPro, DJI, and others) are supported and included in the camera catalog. 360 cameras are not recommended for this kind of processing because they introduce too much distortion.

Is it possible to use RAW photos (14-bit, 16-bit, HDR)?

Yes. Most camera manufacturers’ RAW formats are supported, but currently only 8 bits per channel are used.

Is it possible to make 3D models from video files?

Yes. ContextCapture accepts videos as an input in MP4/WMV/AVI/MOV/MPEG formats, and automatically extracts frames at a user-defined period.
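The frame-extraction step can be illustrated with a small sketch. This is a generic illustration of the concept, not ContextCapture’s actual implementation: given a video’s frame rate and a user-defined extraction period, it computes which frame indices would be kept as photos.

```python
# Illustrative sketch of periodic frame extraction (not ContextCapture's own code).
# Given a video's frame rate, its duration, and a user-defined period in seconds,
# return the indices of the frames that would be extracted as photos.

def frames_to_extract(fps: float, duration_s: float, period_s: float) -> list[int]:
    total_frames = int(fps * duration_s)
    step = max(1, round(fps * period_s))  # frames between two extracted images
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, sampling one frame every 2 seconds:
print(frames_to_extract(30, 10, 2))  # [0, 60, 120, 180, 240]
```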

Can it create models from images edited in Photoshop or other programs, or does it need the raw, unedited image files?

No. Photo editing will confuse the software with regard to the estimation of the optical properties. However, masks can be used if a fixed area has to be removed from the photos.

Can I import calibration parameters for my camera?

Yes. You can import an OPT file, or add a camera to your database and input its specific calibration parameters, such as distortion parameters, principal point, and focal length.

Data acquisition
What is the recommended overlap between photos to achieve better accuracy?

We recommend that every part of the scene be captured in at least three neighboring photographs, with more than 60% overlap in all directions.
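The overlap recommendation translates directly into a capture spacing: the distance between successive shots must not exceed the photo’s ground footprint times (1 − overlap). A minimal sketch of that arithmetic follows; the 50-meter footprint is an assumed example value, not a product specification.

```python
# Sketch: maximum distance between successive shots for a target forward overlap.
# The 60% overlap figure comes from the recommendation above; the 50 m ground
# footprint is an assumed example, not a measured or specified value.

def max_spacing(footprint_m: float, overlap: float) -> float:
    """Largest along-track distance between photo centers that still
    achieves the requested overlap fraction."""
    return footprint_m * (1.0 - overlap)

print(max_spacing(50.0, 0.60))  # 20.0 meters between shots for 60% overlap
```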

What are the requirements of the camera / photos – calibrated, fixed lens etc.?

We recommend using a camera with a reasonable sensor size (in pixels) to reduce the number of photos, and with a fixed focal length. Adjusting the zoom will confuse the software, as it will have to estimate optical properties for every photo instead of for the entire photogroup (a group of photos with the same optical characteristics).

Do you need surveyed ground control?

Ground Control Points (GCPs) are not mandatory, but they are highly recommended to accurately georeference the model, correct drift on corridor acquisitions, improve altitude precision, register different datasets in the same project, and so on.

Should all photos have geotags, or can the software manage with only one?

A few geotags are enough; in practice, though, all photos acquired with a GPS-equipped camera will have geotags in their EXIF attributes.
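EXIF stores GPS positions as degree/minute/second values plus a hemisphere reference (with a library such as Pillow, the GPS IFD can be read via `Image.getexif().get_ifd(0x8825)`). Converting those values to the signed decimal degrees used for georeferencing is straightforward; a minimal sketch:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF degrees/minutes/seconds triple plus its
    hemisphere reference ('N'/'S'/'E'/'W') to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention.
    return -value if ref in ("S", "W") else value
```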

Are photo IDs required to stitch the photos together?

No structured acquisition is required: photos may be shot in any order and are processed regardless of their IDs. That said, metadata such as the Exterior Orientation (EO), and even the Interior Orientation (IO), of every photo will help the software during the aerotriangulation (AT) process.

What is the workflow for flying a site to capture images? I’ve flown a few from about 30 meters and wonder how to also capture the vertical faces of buildings. Can I fly a second flight path and incorporate those images into my model easily?

We offer a comprehensive acquisition guide describing best practices for acquiring photos for a specific purpose. It is possible to acquire vertical imagery in a first flight, acquire oblique imagery in a second, and add both to the same project.

Does ContextCapture have an additional application for flight planning? Specifically, a drone?

Not for now. We believe that our job is to provide the best processing software, and we cooperate with major UAV manufacturers who already have their own mission planning solutions.

How are you handling the obliquity in the image samples?

The software processes all images regardless of orientation, including oblique imagery, in the same way.

Does the camera mounted to drone need to have a known coordinate or geolocation in order to represent true scale and location?

There are several ways to scale and georeference a model: through geotags with the photos, ground control points, or by adding manual tie points in the photos (only for the scale in this case).

How does the software handle the background that is not relevant during the imagery acquisition?

Every static part of the scene that appears sharp and is not too reflective will be accurately reconstructed. If you want to focus on a specific area, the best approach is to define a Region of Interest (ROI) using the dedicated UI.

3D mesh editing
Are the 3D meshes produced fully editable or do they work like fixed blocks?

ContextCapture produces meshes in various formats and structures. They may come as individual tiles at a fixed resolution, in OBJ, FBX, Collada, or STL format, or as a multiresolution mesh, in 3SM, 3MX (Bentley products), OSGB (OpenSceneGraph), LoD Tree, I3S (Esri), or 3D Tiles (web viewing) format. A mesh exported as OBJ (geometry + texture) can be edited in a third-party application and then re-imported into ContextCapture to regenerate the LoDs. An entire 3MX model, at any scale, can be loaded into MicroStation as a reference and used in design and engineering processes.

Can landmarks inside the 3D mesh be selected and manipulated?

Reality meshes in 3D Tiles format can be manipulated, classified, and annotated in MicroStation and in the Bentley RealityData Web Viewer.

Accuracy
What is the accuracy of such models?

The global accuracy is about 1-2 pixels (resolution=projected size of a pixel on the scene, also called Ground Sampling Distance for aerial acquisitions) in a plane perpendicular to the acquisition, and 1-3 pixels along the main acquisition direction. Our recent benchmarks show the global accuracy is close to LiDAR, as long as you have a high enough pixel resolution.
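The pixel resolution (GSD) itself follows from pinhole-camera geometry: altitude times physical pixel size, divided by focal length. The sketch below is illustrative, with assumed sensor parameters:

```python
def ground_sample_distance(altitude_m, focal_mm, pixel_pitch_um):
    """Nadir GSD in meters: the projected size of one sensor pixel on
    the ground, from flight altitude, focal length, and pixel pitch."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3)
```

At 100 m altitude with an 8.8 mm lens and 2.4 µm pixels, the GSD is about 2.7 cm, so a 1-2 pixel planimetric accuracy corresponds to roughly 3-5 cm.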

Can accuracy be improved if the camera positions can be surveyed accurately?

Survey points help to accurately georeference the model in latitude/longitude/altitude and also avoid drift through large areas, and thus increase the global accuracy of a model.

Do you have a recommended list of cameras and types of photos to use to get a certain level of accuracy?

Every acquisition process may benefit from a specific camera system.  However, all cameras, from a mere smartphone to highly specialized aerial multidirectional camera systems, are supported by ContextCapture. What is important is the resolution (projected size of pixels on the scene), the sharpness of the photos, and a fixed focal length.

Are users able to add reflective dot targets to scenes to enhance accuracy?

Reflective dots may help the software in uniform areas (a blank white wall, for instance), where computer vision and photogrammetry algorithms struggle. They are not required in areas with enough texture.

Output
Can 3MX and 3SM be exported to LumenRT?

Definitely! This will help you to enliven any captured context in minutes. 3SM will be a more optimized format for such a purpose.

Is there an option to export to CityGML?

Urban area models generated by ContextCapture are comparable to CityGML LOD3 models, but they do not carry any semantics.

Can these models be integrated with a 3D scanner? (Leica P40)

Georeferenced 3D models can be overlaid with laser scans. This lets users get the best of both worlds: complete a LiDAR acquisition with photos, or extract more detail and increase precision over the same area.

Integration with Google Earth?

ContextCapture produces multiresolution meshes in KML format, which can be loaded directly into Google Earth.

What GIS software are ContextCapture outputs compatible with?

OpenCities Map, Esri’s ArcGIS, SuperMap, and more generally, any 3D GIS or visualization software compatible with a multiresolution-tiled format (OpenCities Planner, Unity 3D, OpenSceneGraph, Eternix’ BlazeTerra, etc.).

Which Bentley software accepts the exported 3D mesh?

All products compatible with the V8i SS4, CONNECT, and DGNDB platforms support the 3MX format: Descartes, Map, ABD, OpenRoads ConceptStation, etc.

Is there a list of compatible 3D printers?

ContextCapture can export an STL or OBJ format, widely accepted by 3D printers.

Do you have an idea of a file size? For example, if I had 200 pictures at 25Megapixels each.

About 150 MB for such a project, roughly 100 times lighter than a colored LAS point cloud and 22 times lighter than a POD file.

Is it possible to export only a colored point cloud?

Yes. ContextCapture can produce models in LAS and POD point cloud formats.

Where can I get the list of 3rd party software that we can use after, for example, the water simulation etc.?

For communication purposes, LumenRT will do the job quite nicely. For more technical analysis, the OpenFlows products are better suited.

Can Bentley’s design tools use the mesh as a surface or terrain, like in power inroads?

Definitely! OpenRoads ConceptStation loads a 3MX as a 3D base map. Descartes can also be used to extract a DTM from our digital surface model.

Does referencing 3mx files in MicroStation use the standard reference functionality, or does it use raster manager functionality and is this fully integrated in ProjectWise?

MicroStation loads the Spatial Reference System (SRS) used to produce the 3MX file and references the model accordingly. We are working on closer integration with ProjectWise.

How to combine points generated by scanner and photos?

In ContextCapture, the “Adjust photos onto Pointcloud” feature automatically runs this alignment before 3D reconstruction.

Is it possible to place created 3D models into different geographical zones?

The Spatial Reference System (SRS) can be selected from a library of more than 4,000 systems when creating the project, and again in the viewer after production. A user-defined reference system can also be used.

Web Viewer
How can I publish the models to my customers?

The Bentley RealityData Web Viewer is the most suitable way to share models with stakeholders. It displays 3D Tiles hosted on ProjectWise ContextShare and supports photo navigation, annotation, and permission management. ContextCapture also offers a free web viewer: you can publish any 3MX model, at any scale, to a web server and make it available to your users through a standard browser.

For the WebGL Viewer, is it possible to take into account the collision?

Clash detection can be done in MicroStation (on extracts of the mesh), or on point clouds in various solutions, but not in a viewer, whether web or desktop.

Processing and analysis
How automatic is the process of processing multiple photos?

The process is fully automatic as long as the input datasets are suitable (overlap, sharpness, optical properties, etc.).

Regarding photo control, does the software prepare an AT solution report?

Yes. There is a report at the end of the AT, containing the various RMS values as well as processing parameters. Quality metrics are also viewable in the 3DView.

When you say “Production time” is that clock time? Or CPU processing time?

Production time means wall-clock time. The average observed production speed is about 15 Gpix per engine per day.
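That throughput makes a rough wall-clock estimate simple: dataset size in gigapixels divided by aggregate engine speed. A back-of-the-envelope helper (ours, not an official estimator):

```python
def production_days(total_gigapixels, engines=1, gpix_per_engine_day=15):
    """Rough wall-clock production time in days, from the quoted
    average throughput of ~15 Gpix per engine per day."""
    return total_gigapixels / (engines * gpix_per_engine_day)
```

For example, 200 photos at 25 megapixels each is 5 Gpix, about a third of a day on a single engine.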

How do you georeference the mesh in ContextCapture?

Either through control points or geotags with the photos.

Is it possible to georeference model by GPS points captured by a camera?

Yes. Indeed, ContextCapture reads the EXIF parameters coming with the photos and extracts the geotags when present, as well as other camera properties, if any.

How do you add control to photos?

Through a dedicated UI in the software. You can also load them through a text file and then identify them in your photos.

How do you scale the contents?

Either by georeferencing the model, or by adding manual tie points with a distance value in the editor.

Is there a function to manually classify the 3D point cloud for 3rd party survey?

Yes, Bentley Descartes and/or Pointools are dedicated to this type of application.

Do you have object classification tools?

Yes. ContextCapture Insights can perform such operations, provided the detectors are properly trained.

Do you have direct volume calculations in ContextCapture?

Yes, using the 3D viewer included with ContextCapture: users can measure coordinates, distances, height differences, areas, and volumes. In the web viewer, only coordinates, distances, and height differences can be measured.

How do you calculate volumes?

Volumes are calculated by referencing either a mean plane fitted through the georeferenced selection polygon, or a custom plane at a specified height.
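Conceptually, this is the integral of the height difference between the surface and the reference plane over the selection polygon. A minimal grid-based sketch of the idea (not ContextCapture’s implementation):

```python
import numpy as np

def cut_and_fill(heights, plane_z, cell_area):
    """Cut/fill volumes of a sampled height grid against a reference plane.

    heights:   2D array of surface elevations inside the selection polygon
    plane_z:   elevation of the reference plane
    cell_area: ground area of one grid cell (m^2)
    Returns (cut, fill) in m^3: material above and below the plane.
    """
    dz = np.asarray(heights, dtype=float) - plane_z
    cut = dz[dz > 0].sum() * cell_area    # surface above the plane
    fill = -dz[dz < 0].sum() * cell_area  # surface below the plane
    return float(cut), float(fill)
```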

Are there size limits in terms of MB/GB & poly count?

No. This is what makes ContextCapture so unique. City or bridge models can easily reach dozens of GB on a hard disk and be streamed through a web or local server, thanks to the multiresolution architecture and the optimization of the mesh.

Are you going to simplify the process for producing orthophoto according to different axes?

This is available in ContextCapture Editor.

What are the greatest challenges users may encounter in the production of 3D models with ContextCapture?

Photo acquisition! The process is truly straightforward when the photos are appropriate: resolution, sharpness, overlap.

Is there a method for using this software indoors?

The software applies to any photo dataset (aerial, ground, outdoor, indoor) as long as the objects in the scene are static (objects that move too much are automatically removed) and neither highly reflective nor too uniform. The best practice for shooting photos indoors is to walk sideways with your back to a wall and shoot photos in multiple forward directions (slightly upwards, downwards, leftwards, rightwards). The acquisition guide provides more detail on this procedure.

System Requirements

 

ContextCapture
Minimum Hardware

At least 8 GB of RAM and an NVIDIA or AMD graphics card, or Intel integrated graphics processor, compatible with OpenGL 3.2, with at least 1 GB of dedicated memory.

Recommended Hardware

Microsoft Windows 7/8/10 Professional 64-bit running on a PC with at least 64 GB of RAM and an Intel i9 CPU (4+ cores, 4.0+ GHz). Hyper-threading should be enabled. NVIDIA GeForce RTX 2080/2080 Ti GPU. Data should preferably be stored on fast storage.

Memory

4 GB minimum

Hard Disk

2 GB free disk space.

Video

NVIDIA or AMD graphics card, or Intel-integrated graphics processor compatible with OpenGL 3.2.

Screen Resolution

1024 x 768 or higher

Orbit 3DM Feature Extraction Standard
Operating System

64 bit, Microsoft Windows 7 or higher

Memory

4 GB to 8 GB

Disk space

500 MB

Screen Resolution

1920 x 1080 or higher

Bentley Pointools
Processor

Intel Pentium-based or AMD Athlon-based processor 2.0 GHz or greater

Operating System
  • Windows 10 (64 bit)
  • Windows 8.1 (64 bit)
  • Windows 8 (64 bit)
  • Windows 7 (64 bit)
Memory
  • 4 GB recommended
Video
  • NVIDIA or ATI (AMD) graphics card
Disk Space

1.5 GB free disk space for installation

Input Device
  • 3-button mouse
Screen Resolution
  • 1024 x 768 display resolution; 1920 x 1080 with true color recommended
