
FAQ

Questions and Answers about the Model

Q: How long did it take to make the model?

A: About one year: four months for data collection, four months for photo alignment and editing, and about four months for final processing (mesh and texture generation) and model cleanup.

Q: Why do some parts of the model appear blurred, distorted, or to have holes?

A: Photogrammetry modeling has some limitations due to how the process works:

First, the modeling process assumes that everything captured in the photos is standing still. If things move between overlapping images (such as leaves and branches on a tree, or people and vehicles), the model may blur them or fail to capture them at all. Objects that are too small or intricate, such as leaves, branches, light posts, or fence posts, may also be modeled only partially or not at all, because the photos contain too little information when they are taken from too far away or without enough overlap and variety of angles.
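How much detail a photo can contribute comes down to its ground sample distance (GSD): the real-world size covered by a single pixel. The sketch below works through the standard GSD formula for a hypothetical drone camera; all parameter values are our illustrative assumptions, not the project's actual equipment.

```python
# Ground sample distance: the real-world width covered by one image pixel.
# Small, thin objects (fence posts, branches) vanish when the GSD is larger
# than the object itself. All camera parameters below are hypothetical.

def gsd_cm_per_px(altitude_m: float, sensor_width_mm: float,
                  focal_length_mm: float, image_width_px: int) -> float:
    """Standard GSD formula: (altitude * sensor width) / (focal length * image width)."""
    return (altitude_m * 100 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Hypothetical 1-inch-sensor drone camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide
for altitude in (30, 60, 120):
    gsd = gsd_cm_per_px(altitude, 13.2, 8.8, 5472)
    print(f"{altitude:>4} m altitude -> {gsd:.2f} cm per pixel")
```

At 120 m this hypothetical camera resolves about 3 cm per pixel, so anything thinner than a few centimeters (a fence wire, a twig) simply never shows up in the data.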

Second, the photogrammetry process builds 3D point clouds by finding similar pixel patterns in multiple offset images that represent the same point on an object. When reflective or transparent surfaces are photographed, false points can be created and reflections recorded, so objects such as windows, shiny metal or plastic, and water often don't turn out well.
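To make that matching step concrete, here is a minimal sketch using OpenCV's ORB feature detector on a synthetic pair of offset views. It is an illustration only: the images and parameters are our assumptions, and production SfM software (COLMAP, for example) does this at a vastly larger scale with additional geometric checks.

```python
# A minimal sketch of the point-matching idea (not the project's actual
# pipeline): find the same pixel patterns in two offset views of one scene.
import cv2
import numpy as np

rng = np.random.default_rng(0)
# Synthetic textured "scene"; the two views are crops offset by a few pixels,
# standing in for two overlapping photos taken from different positions.
scene = cv2.GaussianBlur((rng.random((400, 400)) * 255).astype(np.uint8), (5, 5), 0)
view1 = scene[0:350, 0:350]
view2 = scene[10:360, 15:365]

orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(view1, None)  # keypoints + descriptors, view 1
kp2, des2 = orb.detectAndCompute(view2, None)  # keypoints + descriptors, view 2

# Each good descriptor match is a candidate "same point seen from two angles";
# once the camera positions are solved, such matches triangulate into 3D points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
print(f"{len(matches)} candidate correspondences between the two views")
```

A reflective window breaks this logic because the "pattern" it shows (a reflection) sits at a different apparent depth than the glass itself, producing matches that triangulate to false points.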

Along the same lines, areas with little texture or color variation (such as the canopy above the Miller Baseball and Softball park, which is all white and smooth) often have holes, because the processing software cannot find unique points or pixel patterns in the images to latch onto and use to build the model.
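The texture problem is easy to demonstrate with the same tooling. In this small, hypothetical comparison, the detector finds many keypoints on a varied surface and essentially none on a flat, uniformly colored patch like a white canopy:

```python
# Hypothetical illustration of why uniform surfaces leave holes: feature
# detectors need local contrast, and a flat patch offers none to latch onto.
import cv2
import numpy as np

rng = np.random.default_rng(1)
textured = cv2.GaussianBlur((rng.random((300, 300)) * 255).astype(np.uint8), (5, 5), 0)
uniform = np.full((300, 300), 230, dtype=np.uint8)  # smooth, featureless "white"

orb = cv2.ORB_create()
print("textured patch keypoints:", len(orb.detect(textured, None)))  # many
print("uniform patch keypoints: ", len(orb.detect(uniform, None)))   # zero or near zero
```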

Finally, some areas were simply not photographed. Where a modeled area meets one that is not, the software tries to fill in the gap, which often makes the model appear to be melting or distorted.

Q: Why did you need ground photos as well as drone photos?

A: Traditional photogrammetry models of buildings or large areas have usually relied on drone photos alone, which lack detail on walls, overhangs, entryways, and covered areas. Adding ground photos let us capture detail in those areas and use higher-quality cameras than drone weight limits allow. This especially helps when exploring the model in VR from a first-person, ground-level point of view.

Q: If you took over 100,000 photos, why did you only use 84,000 of the photos to make the model?

A: There are three main reasons photos didn't make it into the final model:
1. Photos were too blurry or of poor quality (a common automated check for this is sketched below)
2. Photos were redundant and didn't add new data to the model
3. Photos did not align (or misaligned) in the software because of insufficient photo overlap or data.
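As a concrete example of the first reason, one widely used screening technique (our illustration; we don't know exactly how this project's photos were filtered) scores sharpness with the variance of the Laplacian. Sharp images have strong edges and score high; blurry ones score low:

```python
# Hypothetical blur screen: score each photo by the variance of its Laplacian
# and discard images below an empirically chosen threshold.
import cv2
import numpy as np

def blur_score(gray: np.ndarray) -> float:
    """Higher = sharper; the cutoff is tuned per camera and scene."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

rng = np.random.default_rng(2)
sharp = (rng.random((200, 200)) * 255).astype(np.uint8)   # crisp synthetic image
blurry = cv2.GaussianBlur(sharp, (15, 15), 0)             # same image, defocused

print("sharp: ", blur_score(sharp))    # large value
print("blurry:", blur_score(blurry))   # much smaller value
```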

Q: Why do some objects (like cars or people) look like they’re cut in half or missing parts?

A: Photogrammetry modeling assumes objects stay still while they are being photographed. If an object moves during that time, the software cannot determine where it belongs and will either exclude it or model it partially, using whatever photographs caught it standing still. That is why people are absent or only partially represented in the model: they were always moving. Vehicles are sometimes partially modeled because they were still at first but moved at some point during the capture, so the software tried to model both the vehicle and what was underneath it.

Q: Why did you complete this project with cameras and SfM instead of using laser scanners?

A: Cost and research focus. Professional laser (LIDAR) scanners capable of producing models like this one come with a steep price tag and were not available to the group at the time of modeling. Photogrammetry only required standard cameras, and part of the project's purpose was to push the boundaries of what photogrammetry and Structure from Motion (SfM) modeling can do.

Q: Could this technology and approach potentially be scaled up to an entire community or town? What benefits would that potentially offer?

A: Of course. As the data grows, so do the time, storage, and complexity involved, but with a big enough budget, powerful computers, plenty of storage, and time, this process could be applied to projects of almost any size. Such models could preserve a snapshot in history, support disaster preparedness, aid urban and civil engineering planning, and serve a number of other purposes. We are just scratching the surface of what these kinds of models can be used for.

Q: If I’m interested in learning and applying this technology, which career path should I choose to study?

A: The group that built this model is based in the Civil Engineering department at BYU, and this kind of modeling is only recently being incorporated into civil engineering practice and education. Geomatics, surveying, remote sensing, and geography are other fields that often use this kind of modeling.

Q: I’m interested in using the BYU Campus Model for my own project or research! How do I obtain permission to use it and get a copy of the model?

A: Please follow this link to fill out a form, and someone will get back to you soon with a response!