Raw Workflows (Changes to White Balance and Exposure)

Comments (33)

  • Jonathan Tanant

Thanks Steven for bringing this subject to the table, I am very interested too!

Mainly for scenes with complex lighting and huge contrast: the raw files keep more information (if only RC could handle that information)... I posted this somewhat related thread: https://support.capturingreality.com/hc/en-us/community/posts/360009453811-Texturing-exposure-EXIF-HDR-

I had some issues with DNGs (because of bad Windows DNG driver support; on some of my machines my DNG files are downscaled), so now I am trying to color-check and develop my raws in RawTherapee and then export JPGs to RC. But it is not such an easy road...

  • Götz Echtenacher

Afaik RC only uses the embedded JPG in the RAW, so it should be handled with care. Also, as Jonathan mentioned, it greatly depends on the Windows driver, since that is how RC accesses RAWs.

How do both of you handle the white balance issue between different cameras? I find that quite challenging, to say the least, even though I think RC does a great job if the differences stay within limits. BTW, I also use RawTherapee; despite much criticism, after thorough testing of many different developers I am convinced that it is one of the best, at least for my X-Trans III.

To be honest, I am more and more going back to using the JPGs straight from the camera. In terms of geometry, the difference is negligible (with quality lenses), despite distortion correction. And if the contrast in the scene isn't extreme, there isn't much need for me to work with RAW processing, plus it's a huge time-saver. Cranking up the shadows won't interfere much with the features, in my opinion, and it makes a huge difference for alignment.

  • Jonathan Tanant

I try to stick with a fixed white balance and avoid auto white balance at all costs. Usually for outdoor work I use the daylight (sunny) or cloudy setting, depending on the lighting... and I am trying to use a ColorChecker chart: on each of my shooting runs I try to get the ColorChecker Passport into some of my pictures, which I can then use in the app to build a profile (from a DNG) and apply that profile in RawTherapee... But the gain in quality is really not obvious; I think precise colorimetric work is much harder on a set of 1000 pictures for photogrammetry than on a portrait facing one direction... so I am still experimenting...

And yes, I agree, I also tend to use the JPGs directly from the camera... much faster, and much lighter to archive (about a quarter of the data size)...

The only exception is when I work with footage from my Mavic Pro drone: the JPGs are so bad, so heavily smoothed, that they are really a no-go. The DNGs are much, much better.

  • Götz Echtenacher

Ah, good to hear - the JPG community seems to be growing! :-) Probably that's becoming more and more viable because in-camera processing keeps getting better.

I heard that a simple 80% gray color check is much quicker but sufficient for most cases. The processing time would still be similar, though, and there are still changes in the light over the course of the day. But I gather that's the only good way to handle differences between cameras? They can be quite obvious, even with the same white balance setting or even with RAW development. What app are you using for those calibration profiles? Can it not be done within RawTherapee?
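For reference, the gray-reference correction discussed here boils down to per-channel gains: scale each channel so the gray patch averages come out neutral. A minimal sketch (the function names are mine, not from any tool mentioned in this thread):

```python
# Sketch: derive white balance gains from samples of a gray card,
# assuming a simple linear per-channel scaling model.

def wb_gains_from_gray(patch):
    """patch: list of (r, g, b) samples taken from the gray reference."""
    n = len(patch)
    means = [sum(px[c] for px in patch) / n for c in range(3)]
    gray = sum(means) / 3.0              # target neutral level
    return [gray / m for m in means]     # per-channel gains

def apply_gains(pixel, gains):
    """Apply the gains to one pixel, clipping at the 8-bit ceiling."""
    return tuple(min(255.0, v * g) for v, g in zip(pixel, gains))

# A warm color cast: red reads high, blue low on the gray card.
patch = [(130, 100, 80), (128, 102, 78)]
gains = wb_gains_from_gray(patch)
print(apply_gains((129, 101, 79), gains))  # close to neutral (r = g = b)
```

This is of course only the white-balance half of the problem; a full DCP profile also corrects the color matrix, which a single gray patch cannot give you.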

  • Jonathan Tanant

I have to use the X-Rite app "ColorChecker Passport", which takes one DNG with the ColorChecker in it and computes a DCP profile that I can then import into RawTherapee and apply to my pictures. But this makes things so complicated...

So what you are doing is setting the white balance manually from your gray card? No issues between different cameras and sensors?

  • Götz Echtenacher

    That's what I heard - didn't do it myself yet.

    So far, I got away with a fixed WB or even auto at times...

  • Steven Smith

So using raw files has no benefit, and might even be slightly worse if the JPG previews a given camera builds into the raw suck, like from the Mavic?

What's the point of supporting raw files? Only to give the camera's JPG preview, without gaining any benefit from all that extra data and drive space?

The only way I see this working, then, is to make adjustments and then rebuild all the DNGs with full-sized previews built in. That at least saves a little drive space; at least I don't have two copies (one DNG and one JPG) of the same picture. There are no time savings here, then.

Jonathan Tanant, I also use a color checker. The short answer to complex lighting is to use a dual-illuminant profile. And if you don't have time to build one, a generic dual-illuminant profile of daylight and tungsten is recommended, as they are as far apart in white balance as you can get.

  • Jonathan Tanant

On the Mavic, it is not the preview in the raws that sucks, it is the embedded JPG processing itself.

Yes, I tried with a dual-illuminant profile, but it just does not work for complex subjects, or I would have to reshoot a color checker every 100 pictures, and that would become a nightmare because I would spend too much time editing the raws...

Take for example a building: with the sun at, say, 4 or 5 PM, the 4 faces get at least 3 different lightings (if not more), from direct sunlight (exposed side) to indirect light and total shade (opposite side)... So for this kind of subject we could shoot 3 sets with 3 color calibrations, but of course you get issues at the overlaps...

And of course sometimes you want these changes, because lighting is part of the subject: you cannot always capture only the ideal reflected color of your subject (even if we could).

Ideally I would see this as part of the workflow in RC (I could maybe turn it into a feature request):

- work with the RAWs, so no information is lost (12-14 bits + EXIF that gives RC information about exposure...);

- align the pictures;

- for outdoor shoots, given the time of day and GPS position, RC could compute a light model of where the sun is; the user would just have to say how cloudy it was that day;

- at texturing, all this information is used to compute the texture according to the user's choice:

    - remove the illuminant (i.e. try to balance every picture) or not;

    - keep a constant exposure (so dark areas might be all dark and bright areas all white) or normalise the exposure (what a camera usually does when setting exposure).

Really, it would be so great to have this handled in RC!
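The sun-position part of this wish list is actually cheap to compute from date, time and latitude. A rough sketch using a textbook declination approximation (accurate to a few degrees only, and it ignores longitude, the equation of time and refraction, but that would be plenty to tell the exposed side from the shaded side):

```python
import math

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.
    Assumes local solar time (sun highest at hour 12)."""
    # Cooper's approximation for the solar declination.
    decl = math.radians(23.44) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    # 15 degrees of hour angle per hour away from solar noon.
    hour_angle = math.radians(15 * (solar_hour - 12))
    lat = math.radians(lat_deg)
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.asin(sin_elev))

# Summer solstice (day ~172), latitude 48 N, solar noon:
print(round(solar_elevation_deg(48, 172, 12), 1))  # roughly 65 degrees
```

Combined with the camera orientations RC already solves, this would be enough to classify each photo as sun-facing or shaded, which is the kind of light model described above.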

  • Götz Echtenacher

Hey Steven, don't take my word for it though. When I tried with direct RAW, that's just how I interpreted it; it might be different for others. Plus, something just struck me that I didn't consider at the time: it could be that RC uses the JPG only for the display and calculates with the RAW. That seems a bit error-prone though.

I would also be careful about relying on the in-RAW JPG, since you don't have much influence over the settings.

EDIT: I noticed at the time that when I import RAWs into RC, they look exactly like the camera JPGs and not at all like the unprocessed RAW. I just realized it might also be that the same rendering settings are used, since they might be stored in the EXIF. But then one would lose a great deal of the advantages of RAW processing...

  • Mike Simone

Whenever I work with my DJI Phantom 4 Advanced, the DNG files do not work well at all with the basic Windows drivers or with RC. I bring them all into Photoshop and convert them to TIFs. The files are huge, but I keep all the raw information, which RC can use at that point.

So my workflow right now is to bring everything into Photoshop, mass-edit the exposure, highlights, shadows, etc., and then convert them all to TIF.

  • Steven Smith

    Jonathan Tanant,

I just read your other post. I haven't tested this, but I believe the way to do the HDR you want and keep consistency in RC (how I would do it, anyway) would be to spot-meter the brightest part, i.e. out the window, and expose it at about +1 to +2 stops overexposed. Take note of the shutter speed and write it down. Then do the same with the darkest part of the subject, but at around -2 stops, again depending on taste.

Say for the brightest parts the shutter was 1/500 and for the darkest part it was 1/4 sec. You then count the stops of difference; every halving or doubling is one stop. So:

    1) 1/500

    2) 1/250

    3) 1/125

    4) 1/60

    5) 1/30

    6) 1/15

    7) 1/8

    8) 1/4

    9) 1/2

You can then set your camera to bracket 9 shots around the middle (1/30) for every shot. If your camera can't bracket that many, you have to do it by hand. Use Lightroom to generate an HDR for each bracket; Lightroom will spit out a raw DNG HDR image that you can then use. The trick is to let the darkest parts stay dark and the brightest parts be almost blown out; it looks more natural.

I see that you're a Unity dev, so I don't know how familiar you are with cameras; just ask any questions if needed.

  • Steven Smith

Where is WishGranter? Hearing it from the RC staff would put my mind at ease. Testing is great, but it involves a lot of guesswork.

    What are the benefits of working straight from raw files, if any?

What are the drawbacks?

What are the limitations of RC reading a raw file?

If it varies from codec to codec, generalize it, or maybe just talk about DNGs.

  • Jonathan Tanant

Thanks Steven for the explanation of the bracketing method.

Actually, I am sure this would give very high quality, but I am afraid it would be really slow, because of the tripod (when I have sufficient light I work handheld to shoot faster) and because of the processing for each picture. I can't imagine doing that for each aligned camera in RC (we are talking about thousands of cameras).

What I had in mind was more using the raw (not as good as bracketing, but better than 8-bit JPGs) and using the information it contains (EXIF).

    But thanks, I will try on a simple subject.

  • Steven Smith

Yeah, tell me about it! It's a nightmare. I just did a small screw with focus stacking: something like 200 images, stacked, turned into 27 final images to then run through RC. A lot of work for just 27 images. Or doing an HDR 360 pano in Lightroom: 50 HDRs processed as described above, then stitched into a DNG pano in Lightroom. It's a lot of work for just one image, or in RC, one model. Sometimes I just want to push the limits, and every time I do, I learn.

  • Jonathan Tanant

Yes, I saw your post about the screw! Great result!

  • Steven Smith

Do you want or need ambient lighting? In photography (in studio conditions) you can control the mix of ambient vs. flash via shutter and aperture: shutter controls ambient light and aperture controls flash intensity, for the most part. You kill ambient light with a high shutter speed. As long as you can get even exposure and no shadows with a flash setup, your results would be the same as de-lighting, or better, since it's not a software approximation. This would also avoid mixed white balance.

For a big room, using 1 or 2 flashes bounced off the ceiling would be the cheapest and easiest way to do it. Use your color checker when doing this, as the ceiling or wall will cause color casts, though these are easily corrected. You wouldn't move the flashes, and you could shoot handheld. For smaller subjects, a light box or ring light (ring flash) should work.

  • Götz Echtenacher

Bouncing flashes off the ceiling is not a bad idea, I like it!

    Wishgranter is pretty much gone from this forum, so I wouldn't count on him too much (prove me wrong Milos! :-)

    Also the staff usually doesn't contribute too much to such discussions.

It really is about experimenting and finding out what works best for you. Where do you think Wishgranter got his amazing knowledge from? ;-)

    So roll up the sleeves and click those buttons!

Jonathan, I think 16-bit TIF is supported, so that would be the way to go there. Unfortunately, it is only for input; output is still only 8-bit, afaik. So not really much gain there, imho. Did I say this in this thread or another? Anyway, I think it helps a lot to raise the shadows (blacks) like crazy to give RC more to work with. It can also result in something coming close(-ish) to a de-lighted image.

  • Steven Smith

So RC is definitely only using the JPG previews of DNG files. That would explain how DNG files are processed just as fast as JPGs. Changes to white balance and exposure didn't show in RC even after clicking "Save metadata to file" in Lightroom; only after clicking "Update DNG preview & metadata" did RC reflect those changes in the 2D view and in the point cloud after an alignment. To be thorough, every test was started as a new project with the cache cleared each time.

  • Steven Smith

Yep, after rebuilding the DNGs with no preview in Lightroom, it seems there is some absolute minimum, like a 128x128 preview, and that is all RC can read from the 42 MP DNG. So unless you know that the JPG previews built into your raw files are full-res and have the processing you want (noise reduction, sharpening, etc.), you're better off exporting full-quality JPGs. I don't think I'll be using TIFFs anymore, as there is no quality difference, and the downsides are much slower speeds and massive files, 4 times larger than the original raw files. I have found that compressing them with either ZIP or LZW only increases the already slow TIFF processing time in RC, and the files are still much larger than the original raws.
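If you want to check what preview is baked into your own raws, the embedded previews are ordinary JPEGs (SOI marker FF D8, EOI marker FF D9), so a crude byte scan is enough for a quick size check. A sketch under that assumption; a proper tool such as exiftool parses the container correctly, this only grabs the largest JPEG-looking span:

```python
# Sketch: find the largest embedded JPEG inside a raw/DNG container
# by scanning for SOI (FF D8) .. EOI (FF D9) byte spans.

def largest_embedded_jpeg(data: bytes) -> bytes:
    best = b""
    start = data.find(b"\xff\xd8")
    while start != -1:
        end = data.find(b"\xff\xd9", start + 2)
        if end == -1:
            break
        candidate = data[start:end + 2]
        if len(candidate) > len(best):
            best = candidate
        start = data.find(b"\xff\xd8", start + 2)
    return best

# Usage sketch (path is hypothetical):
# with open("photo.dng", "rb") as f:
#     preview = largest_embedded_jpeg(f.read())
# print(len(preview))  # suspiciously small => RC only sees a thumbnail
```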

  • Götz Echtenacher

This "update preview and metadata", was that in RC or in your developer?

    Since it eventually DOES show your changes, are you still sure it's the JPG or could it really be the RAW data now?

With my Fuji RAWs, it doesn't seem to work at all, even though I have the codec installed - it just stays at the first hugely pixelated previews, and the blue dots on top of the 2D windows never stop cycling...

  • Steven Smith

I did some testing. You probably want to watch this at 2x playback.

    https://youtu.be/Cynxkw9-E8Q

I'll have part 2 tomorrow.

  • Steven Smith

Jonathan, BTW, about the mixed white balance: you could just do it the way most movies do. What I have come to do, and what most "cinematic" productions do, is simply use a white balance of 5500 K. As a photographer and videographer I use this all the time, and it looks good and natural to the eye.

Here is an example of extreme mixed lighting, all shot at 5500 K, for a church. Notice how the drone and slow-motion footage don't throw you off, even though none of it is properly white balanced.
    https://youtu.be/POgwraWx5aU?t=1m12s

(Edit: To elaborate on this, the way it was explained to me was: "You can mess around trying to get it perfect in every frame and it will NEVER look right, or you can use the daylight WB that your eyes are most used to and let them do the correction.")

  • Steven Smith

    Raw workflow testing part2

    https://youtu.be/M_L9BmohRB8

    Summary:

The previews built into raw files straight from the camera are bad (at the very least not ideal).

Raw codecs are for generating thumbnails and giving a quick preview of the file; they in no way actually read the raw data. In most cases it's just a JPG baked into the raw for the purpose of a quick preview. (This is the method RealityCapture is using to read said raw files.)

Lightroom-converted DNGs are essentially the same thing as a full-quality JPG, if you bake in full-quality previews. Watch part one for details.

TIFFs are probably not worth the time or space they take up. I might test PNG, as it is technically lossless, but not today; I'm tired of recording.

  • Jonathan Tanant

Thanks Steven for these videos; we now have more information about how RC handles raw files. I even suppose that RC is actually relying on the installed driver: it may be possible that we get different results with another set of codecs installed. But to be safe, maybe it is better to stick with JPGs (or TIFFs; I know they are huge, but maybe with fast drives they could speed up image reading?) generated outside RC.

About PNGs: according to my tests on feature-rich images (the images that we love in photogrammetry), you do not save much space compared to TIFFs, so I don't think it is worth it.

About the 5500 K white balance for all shoots, I eventually came to the same conclusion. My reasoning was: instead of measuring white balance or switching between the sunny, cloudy and shade presets (and making editing complicated afterwards), let's stay on the middle preset (cloudy), which should be an all-around preset for most cases. Actually I think that, depending on the body, 5500 K is in between "sunny" and "cloudy"... I will test 5500 K on my next capture, thanks for the information! Of course that does not prevent us from getting some color checker shots, just in case...

For large environment captures, we could also try to capture the color temperature / white balance coming from all directions with a diffuse gray sphere probe. Does anybody have experience with this? It could then be used, after a first alignment, to process the raw pictures (using the XMP data that contains the image orientations)... How does that sound?


  • Jonathan Tanant

And about the flash: I did some tests with a ring flash, but found that it kills too much of the lighting of the scene (which is, of course, the point). So for now I am sticking to "natural" light shoots. But I will test the ceiling-bounced flash.

Götz, I tried the 16-bit TIFFs, but there aren't really any high dynamics there; I think it's mostly because the input images are processed as only 8 bits, without taking the exposure (EXIF) into account. So yes, maybe there is some 8-bit blending into a 16-bit result that gains a little dynamic range, but no real HDR, I am afraid. Can someone from Capturing Reality confirm?

Here again, if Capturing Reality has no time to investigate this area of high-dynamic-range texturing, we could maybe use the XMP metadata and the rectified images to texture the unwrapped model outside of RC... I don't know how complicated that would be... I think I will give it a try.

  • Steven Smith

    Jonathan,

I believe you are asking about an incident light meter. I don't know if it would give you the results you're looking for, as it only measures at one spot (it would give you one temperature or the other, or somewhere in the middle). There is something called the flambient (flash + ambient) method that I think might work better for you: you shoot 2 frames, one with flash and a high shutter speed (to kill all ambient light and get the "most" correct color), then you shoot a pure ambient exposure. In Photoshop you open them as 2 layers and change the blending mode, depending on which is on top, to Color (for the flash one) or Luminosity (for the ambient one). This gives you the ambient glow of lights and windows, shadows, and streams of light coming through a window, while keeping the whole scene color balanced. I use this in real estate photography and learned it from YouTube.

    https://www.youtube.com/results?search_query=flambient
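The core of that blend can be sketched per pixel: take hue and saturation from the flash frame (trusted color) and lightness from the ambient frame (trusted brightness). A minimal stdlib sketch in HLS space; note this only approximates Photoshop's Color/Luminosity modes, which use their own luminosity math:

```python
import colorsys

def flambient_blend(flash_px, ambient_px):
    """Combine one flash pixel (trusted color) with one ambient pixel
    (trusted brightness). Pixels are (r, g, b) tuples in 0..255."""
    fr, fg, fb = (c / 255.0 for c in flash_px)
    ar, ag, ab = (c / 255.0 for c in ambient_px)
    fh, _fl, fs = colorsys.rgb_to_hls(fr, fg, fb)   # keep hue + saturation
    _ah, al, _as = colorsys.rgb_to_hls(ar, ag, ab)  # keep lightness
    r, g, b = colorsys.hls_to_rgb(fh, al, fs)
    return tuple(round(c * 255) for c in (r, g, b))

# A neutral-ish wall lit by flash (correct color) but dim blue ambient:
print(flambient_blend((200, 190, 180), (90, 110, 140)))
```

In practice you would run this over whole aligned image pairs, but the per-pixel logic is the whole trick.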

  • Steven Smith

    Can we PM on this platform? I don't see a way.

    Jonathan,

    or anyone with experience

I have some questions about 3D modeling. My profession is photography, and my hobby is computers. When it comes to modeling software and game engines like Unity and Blender, I'm overwhelmed. I would love to be able to fix models from this, much like I would touch up photos in Photoshop, but I can't find good educational material (really, I just don't know where to start). The most I've done is bring a model into VR by following

    https://developer.valvesoftware.com/wiki/SteamVR/Environments/Photogrammetry#Preparing_your_model_for_SteamVR_Home_.28Optional.29

I can't afford ZBrush at $900, or Maya at $150/month or $1500/year.

    I don't want to develop games, just touch up my models.

    Can you recommend any software or tutorials?

  • Jonathan Tanant

No, this is not an incident light meter; it is a gray diffuse sphere probe that you photograph, which gives you the incident light coming from all directions around the sphere (except from directly behind it, but if your lens is long enough, that is only a small part). So the idea would be to shoot this probe to get the incident color temperature depending on direction, and then use this to correct the pictures, depending on the orientation of the subject and/or camera.

    I would recommend MeshLab (open source and free), Blender (open source and free) and Unity3D engine (there is a free license). MeshLab works very well with RC, and Blender is hard to learn but works very well for baking.

  • Götz Echtenacher

    Hey Steven,

    thank you very much for those two awesome videos!

    I would say you proved it beyond reasonable doubt...  :-)

  • Steven Smith

Thanks! I'm glad if they helped anyone. I'm not used to being in front of the camera/mic myself, and find it uncomfortable to say the least. Maybe I should do a whole series on RC. I don't expect or want to reach a large audience, but I think it would be a good way to have a discussion about how RC works, since the reading material is limited to the most generic instructions. Just as I forced myself to learn with these videos, I would also learn from future projects.
