• Hi UnknownTiredMan, how did you compute the m parameters? Did you use the previous equations?

These equations are for computing 2D image coordinates; how did you end up with the black areas there?

• I compute the m parameters the following way:

1. Convert to homogeneous coordinates using the pixel transformations and the K-matrix (the inverse of the formula I describe below).
2. Compensate Brown's distortion (OpenCV implementation).
3. Convert back to pixel coordinates with the formula:
```
scale = max(width, height)
mx = (px * focal / 35 + ppU) * scale + width / 2
my = (py * focal / 35 + ppV) * scale + height / 2
# focal, ppU, ppV - received from .XMP
```

Thus, I get a matrix of points from which I compute the undistorted image. But the result does not match the image from RC. And with positive radial distortion, I get an encircling black area on the image.
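A minimal NumPy-only sketch (with a hypothetical image size and k1 value) of why a positive radial coefficient produces that encircling border: the destination-to-source map pushes edge pixels outside the source image, so `cv2.remap` has nothing to sample there and fills those pixels with black.

```python
import numpy as np

# Hypothetical size and coefficient, purely for illustration.
h, w = 100, 100
k1 = 0.5  # positive radial coefficient

cols, rows = np.meshgrid(np.arange(w), np.arange(h))
x = (cols - w / 2) / (w / 2)  # normalized coordinates in [-1, 1]
y = (rows - h / 2) / (h / 2)
r2 = x * x + y * y
k = 1 + k1 * r2               # radial factor, > 1 away from the center

# destination-to-source sampling positions, as used by cv2.remap
src_x = x * k * (w / 2) + w / 2
src_y = y * k * (h / 2) + h / 2

# pixels whose source position falls outside the image get filled black
outside = (src_x < 0) | (src_x >= w) | (src_y < 0) | (src_y >= h)
print(outside[0, 0], outside[h // 2, w // 2])  # corner True, center False
```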

Example:

• Hi, is it possible to share your whole script?

Is it possible that there wasn't any distortion over the image, as it was created digitally?

• 1) Yeah. Below is a Python script. I used OpenCV to transform the image by coordinates.

```
import numpy as np
import cv2

def toHomo(mx, my, width, height, focal, ppU, ppV):
    scale = max(width, height)
    px = ((mx - width / 2) / scale - ppU) * 35.0 / focal
    py = ((my - height / 2) / scale - ppV) * 35.0 / focal
    return px, py

def fromHomo(px, py, width, height, focal, ppU, ppV):
    scale = max(width, height)
    mx = (px * focal / 35 + ppU) * scale + width / 2
    my = (py * focal / 35 + ppV) * scale + height / 2
    return mx, my

def brownDistortion(x0, y0, k1, k2, k3, p1, p2):
    x2 = x0 * x0
    y2 = y0 * y0
    xy = x0 * y0
    r2 = x2 + y2
    k = 1 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    tx = p1 * (r2 + x2) + 2 * p2 * xy
    ty = p2 * (r2 + y2) + 2 * p1 * xy
    return x0 * k + tx, y0 * k + ty

original = cv2.imread("image.jpg")
# data from .XMP
height = len(original)
width = len(original[0])
focal = 60
scale = max(width, height)
# order: k1 k2 p2 p1 k3
distKoeff = np.array([0.1, 0.5, 0, 0, 1])
pp = [0.001, 0.002]

mapX = np.zeros((height, width), np.float32)
mapY = np.zeros((height, width), np.float32)
for row in range(0, height):
    for col in range(0, width):
        mx, my = toHomo(col + 0.5, row + 0.5, width, height, focal, pp[0], pp[1])
        mx, my = brownDistortion(mx, my, distKoeff[0], distKoeff[1],
                                 distKoeff[4], distKoeff[3], distKoeff[2])
        px, py = fromHomo(mx, my, width, height, focal, pp[0], pp[1])
        mapX[row][col] = px - 0.5
        mapY[row][col] = py - 0.5

result = cv2.remap(original, mapX, mapY, cv2.INTER_CUBIC)
cv2.imwrite("result.jpg", result)
```

2) Yeah, this image was created digitally, but I added a real camera distortion with a script.

• Hi, what I noticed is that you are using this:

`focal / 35`

According to our equations, you should use 36.
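Under that correction, the conversion pair with 36 instead of 35 can be sketched as follows (the example numbers are made up for illustration):

```python
# Sketch of the corrected conversion: 36 mm (the long side of a full
# 36x24 mm frame) instead of 35. Values below are hypothetical.
def to_homo(mx, my, width, height, focal, ppU, ppV):
    scale = max(width, height)
    px = ((mx - width / 2) / scale - ppU) * 36.0 / focal
    py = ((my - height / 2) / scale - ppV) * 36.0 / focal
    return px, py

def from_homo(px, py, width, height, focal, ppU, ppV):
    scale = max(width, height)
    mx = (px * focal / 36.0 + ppU) * scale + width / 2
    my = (py * focal / 36.0 + ppV) * scale + height / 2
    return mx, my

# the two directions should round-trip back to the original pixel coordinates
args = (4000, 3000, 60.0, 0.001, 0.002)
mx, my = from_homo(*to_homo(1234.5, 678.9, *args), *args)
print(round(mx, 6), round(my, 6))  # 1234.5 678.9
```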

Also, in the Brown distortion there are some differences. You have:

`tx = p1 * (r2 + x2) + 2 * p2 * xy`

but it should be:

`tx = p1 * (r2 + 2 * x2) + 2 * p2 * xy`
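Applied to the function from the script above, a sketch of the Brown model with the corrected tangential terms (factor 2 inside the bracket, for both tx and ty) would look like this:

```python
def brown_distortion(x0, y0, k1, k2, k3, p1, p2):
    x2, y2, xy = x0 * x0, y0 * y0, x0 * y0
    r2 = x2 + y2
    # radial part unchanged from the original script
    k = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    # corrected tangential terms (was: p1 * (r2 + x2) + 2 * p2 * xy)
    tx = p1 * (r2 + 2 * x2) + 2 * p2 * xy
    ty = p2 * (r2 + 2 * y2) + 2 * p1 * xy
    return x0 * k + tx, y0 * k + ty

# with all coefficients zero the point must pass through unchanged
print(brown_distortion(0.1, 0.2, 0, 0, 0, 0, 0))  # (0.1, 0.2)
```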

• Yeah, thanks. I have corrected the script, but it didn't solve my problem. The black encircling area still remains.

• Hi, sorry for my late answer.

Here you can find c# code with comments how to undistort the image: https://we.tl/t-wiF8mOyZlc

For the undistortion you need to compute the edge points and set which image part is interesting for you (more about this can be found in the application help under Undistorted images).
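Roughly, the edge-point idea can be sketched like this: push the border pixels of the distorted image through the undistortion function and take the extremes of the result. The undistortion function and coefficient below are placeholders, not RealityCapture's actual model.

```python
import numpy as np

def undistort_point(x, y, k1=0.1):
    # placeholder undistortion: a single radial term; the real model
    # (Brown, division, ...) from the camera registration would go here
    f = 1 + k1 * (x * x + y * y)
    return x * f, y * f

# sample the four edges of a normalized [-1, 1] x [-1, 1] image
t = np.linspace(-1, 1, 101)
edge = [(v, -1) for v in t] + [(v, 1) for v in t] + \
       [(-1, v) for v in t] + [(1, v) for v in t]

# outer boundary = bounding box of all mapped edge points; the largest
# rectangle that fits inside them would give the inner boundary
xs, ys = zip(*(undistort_point(x, y) for x, y in edge))
outer = (min(xs), min(ys), max(xs), max(ys))
print(outer)  # the corners map furthest out: about (-1.2, -1.2, 1.2, 1.2)
```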

• Hi, thanks for sharing the code.

I looked through it. It is pretty similar to my script, except for the division and perspective distortion. But the last part is interesting to me. How do you compute the edge points? As I understood from the help, there are three types of edges (inner boundary, outer boundary, and in between), and they could be computed from the edge points with the undistortion function.

Do you use the OpenCV formula for this?

• I am not sure how it is computed internally, but there is an option in RealityCapture to export the undistorted image. Then you can find the boundary values from this image and use them in the script.
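Reading the boundary values off the exported undistorted image could then look like this (a synthetic array stands in for the export):

```python
import numpy as np

# Synthetic stand-in for the undistorted image exported from RealityCapture:
# black border around a non-black valid region.
img = np.zeros((10, 12), np.uint8)
img[2:8, 3:10] = 255  # pretend this is the valid (non-black) part

# bounding box of the non-black pixels = the boundary values for the script
ys, xs = np.nonzero(img)
left, top, right, bottom = xs.min(), ys.min(), xs.max(), ys.max()
print(left, top, right, bottom)  # 3 2 9 7
```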