Ability to mark flat surfaces as such
I would like to see an option that lets us define flat surfaces on one or more source images, either by a single click or by drawing a polygon, and link the defined surfaces together (assign different colors, maybe?) to mark them as the "same surface".
I think that would help RC to return better surfaces for those cases where the lack of features results in either no mesh or a "crater landscape".
It certainly would improve indoor scans by quite a lot.
Mayhaps even allow designations such as "wall", "floor" and "ceiling" so RC can tell which way is down :)
I don't think any of the competitors have such a feature. At least I don't know of any that do...
-
This is a feature request with high prio for me. It makes all the difference in architectural visualisations, which is what I want to use RC for. These blobbery surfaces really don't look good, as if the house had been burnt out or bombed.
I tried the Simplify Tool, but the corners still weren't straight, it produced an incorrect texture map, and the surface still wasn't completely flat. It was also a lot of hassle to place a cube around the surface.
-
It would have to be 'nearly flat', with maybe a percentage value to define 'how flat' - from 0% = truly flat, up to say 15% = some defined amount of waviness. RC might call it a 'weighting', as elsewhere, so it knows how rigidly to act upon the 'this is flat' instruction. Because, in buildings at least, surfaces are rarely truly flat in reality.
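As a rough sketch of what such a weighting could mean in practice (plain numpy; flatten_region is a made-up illustrative helper, not anything RC exposes): fit a best plane through the marked patch and pull each vertex toward it by the chosen percentage.

```python
import numpy as np

def flatten_region(vertices, weight):
    """Pull a patch of marked vertices toward their best-fit plane.
    weight: 0.0 leaves the patch untouched, 1.0 forces it perfectly planar."""
    centroid = vertices.mean(axis=0)
    centered = vertices - centroid
    # The smallest right-singular vector of the centred patch is the plane normal
    normal = np.linalg.svd(centered, full_matrices=False)[2][-1]
    # Signed distance of each vertex from the fitted plane
    dist = centered @ normal
    # Blend each vertex toward its projection onto the plane
    return vertices - weight * np.outer(dist, normal)

# Example: a noisy "wall" patch flattened with an 85% weighting
rng = np.random.default_rng(0)
wall = np.column_stack([rng.uniform(0, 3, 500),      # along the wall
                        rng.uniform(0, 2.5, 500),    # up the wall
                        rng.normal(0, 0.02, 500)])   # blubbery depth noise
flattened = flatten_region(wall, weight=0.85)
print(round(wall[:, 2].std(), 4), "->", round(flattened[:, 2].std(), 4))
```

At 100% the patch becomes truly flat; at 85% it keeps 15% of the original waviness, which matches the 0-15% idea above.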
The 'by single click' option would be a challenge to implement; drawing polygons would be a huge task - multiple flat surfaces on multiple photos. I guess it would have to be done within RC after loading, and it would not be possible to transmit and save the markings back to storage in Windows Explorer folders?
What about multiple complex objects, like a wall-light fitting, on an otherwise flat, featureless expanse?
-
Pjotr, this would require the ability within RC to actively change the mesh geometry, which afaik is not possible at the moment. Doing this within the 2D imagery would have the advantage that the general workflow is already established; the points would act like control points. This is, by the way, how older photogrammetry software worked. All that would be necessary is to establish constraints between those points, a bit like it is already possible to define a distance between two CPs.
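To make the 'constraints between points' idea a bit more concrete, a planarity constraint could be expressed as one extra residual per marked point in the least-squares problem, much like the residual a distance constraint between two CPs adds. A minimal sketch (my own illustrative code, not RC's internals):

```python
import numpy as np

def planarity_residuals(points, plane, weight=1.0):
    """Soft 'these control points are coplanar' constraint.

    points : (N, 3) triangulated positions of the marked control points
    plane  : (4,)  plane parameters [a, b, c, d] of ax + by + cz + d = 0
    weight : how rigidly the solver should enforce flatness
    Returns one weighted residual per point (its signed distance to the plane),
    to be minimised together with the usual reprojection errors.
    """
    normal, d = plane[:3], plane[3]
    return weight * (points @ normal + d) / np.linalg.norm(normal)

# Example: three marked wall points checked against the candidate plane z = 0
pts = np.array([[0.0, 0.0, 0.01], [1.0, 0.0, -0.02], [0.5, 1.0, 0.03]])
print(planarity_residuals(pts, np.array([0.0, 0.0, 1.0, 0.0]), weight=2.0))
```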
-
Last year I wrote an article on ArchDaily and a more detailed tutorial about the use of RC for architecture visualisation, using a low-cost drone to capture the images. Architects like to show their design in context, i.e. in its future setting, and photogrammetry can play an important role there. The articles received 50k views.
Now I would like to write an update of the article, addressing some points that readers brought up or that I ran into myself as limitations. The "blubbery" surfaces of walls and roofs are seen as one of the obstacles to wider adoption of photogrammetry for architecture visualisation. In my post above I made a suggestion for improvement that requires manual work. Have the algorithms perhaps improved to the point that the 3D mesh can be improved automatically - identifying flat surfaces and replacing the blubbery surface with a flat one? Or are there 3rd-party solutions which can do that cost-effectively?
Any other suggestions for improving the article & tutorial?
Thanks,
Pjotr
-
This ability would be a killer feature, much used in building photogrammetry - especially if AI/automated (subject to a human yes/no/tweak).
What about a blob (like a wall light fitting) in the middle of a flat area? Would it be difficult to make the algorithm 'blind' to such perturbations? To reclassify the surrounding surface as 'flat' but leave the blob present in the middle of it?
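For what it's worth, a RANSAC-style plane fit is naturally 'blind' to such a blob: the light fitting's points fall outside the inlier set and are simply left alone, while the surrounding surface gets reclassified as flat. A small post-processing sketch on an exported point cloud using the open-source Open3D library (the thresholds are just illustrative, and this is not something RC does today):

```python
import numpy as np
import open3d as o3d

# Synthetic wall: a noisy flat plane plus a "light fitting" bump in the middle
rng = np.random.default_rng(1)
wall = np.column_stack([rng.uniform(0, 3, 2000),
                        rng.uniform(0, 2.5, 2000),
                        rng.normal(0, 0.01, 2000)])
bump = np.column_stack([rng.uniform(1.4, 1.6, 200),
                        rng.uniform(1.2, 1.4, 200),
                        rng.uniform(0.05, 0.15, 200)])   # sticks out of the wall
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(np.vstack([wall, bump]))

# RANSAC plane fit: the bump's points come out as outliers and are not touched
plane, inliers = pcd.segment_plane(distance_threshold=0.03,
                                   ransac_n=3,
                                   num_iterations=1000)
flat_part = pcd.select_by_index(inliers)               # reclassified as 'flat'
blob_part = pcd.select_by_index(inliers, invert=True)  # the fitting, left as-is
print(len(flat_part.points), "wall points,", len(blob_part.points), "blob points")
```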