Set of measured points to known coordinates #30
Hmm, I'm not sure I understand this question. If you want to go from two-dimensional image coordinates into three-dimensional world coordinates, you'll need a depth map to accompany your image. Without a depth map, each pixel corresponds to a ray in three dimensions. Put another way, a pixel alone only constrains the 3-D point to lie somewhere along that ray.
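To illustrate the pixel-to-ray point (a sketch with a made-up pinhole intrinsic matrix `K`; none of this is part of CoordinateTransformations): inverting the camera matrix turns a pixel into a direction, and depth remains a free scalar along that direction.

```julia
using LinearAlgebra

# Assumed pinhole intrinsics: focal length 800 px, principal point (320, 240).
K = [800.0   0.0  320.0;
       0.0 800.0  240.0;
       0.0   0.0    1.0]

# Without depth, pixel (u, v) only determines a unit ray direction in camera
# coordinates; every point λ * ray(u, v) with λ > 0 projects back to (u, v).
ray(u, v) = normalize(K \ [u, v, 1.0])
```

The principal point maps to the optical axis: `ray(320, 240)` is `[0, 0, 1]`.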
I'll try to explain myself a little better: if I have a two-dimensional grid (x & y), and I have the dimensions of this grid (each cell's width and height), then how would I use that information to create a transformation? In my specific case, I have an image of a checkerboard, and I want to transform locations in the image (which can only exist within the plane of the checkerboard) to real-world 2D coordinates.
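For the grid part of the question, a uniform cell size can be written down directly (a sketch assuming 30 mm square cells; the value is made up):

```julia
using CoordinateTransformations, LinearAlgebra

cell = 30.0                          # assumed cell width/height, in mm
grid_to_world = LinearMap(cell * I(2))  # scale grid indices to millimetres
grid_to_world([2, 3])                # → [60.0, 90.0]
```

Non-square cells would use `Diagonal([width, height])` instead of the uniform scaling.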
OK, I now know what I'm looking for, in Matlab at least: …
Oh, OK, I understand what you're going for now. So you have two grids of points such that `trans(pos1[i,j]) == pos2[i,j]`. There's nothing in the package which does this yet, though I'd say this is more of a problem for something like the …
Sorry to bring this up again, but: …
Ah, yes, CoordinateTransformations.jl is definitely designed to be one piece of the puzzle for people doing calibrations, SLAM, etc. (This was precisely the kind of problem we were thinking of when developing it.) The way I imagine it to be used: write a cost function describing how well your images match, and use Julia's autodifferentiation packages to optimize the calibration parameters (using your own optimizer or one of Julia's higher-level ones). (Another tip: use …)
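A minimal version of that cost-function-plus-autodiff workflow might look like this (a sketch assuming Optim.jl and StaticArrays; the synthetic data and the flat six-parameter encoding of the affine map are made up for illustration):

```julia
using CoordinateTransformations, StaticArrays, LinearAlgebra, Optim

# Synthetic calibration data: apply a known affine map to some sample points.
moving = [SVector(0.0, 0.0), SVector(1.0, 0.0), SVector(0.0, 1.0), SVector(1.0, 1.0)]
truth  = AffineMap(SMatrix{2,2}(0.9, 0.1, -0.2, 1.1), SVector(5.0, -3.0))
fixed  = truth.(moving)

# Cost: sum of squared residuals over all correspondences, with the six
# affine parameters packed into one flat vector for the optimizer.
function cost(p)
    f = AffineMap(SMatrix{2,2}(p[1], p[2], p[3], p[4]), SVector(p[5], p[6]))
    sum(norm(f(m) - x)^2 for (m, x) in zip(moving, fixed))
end

# Forward-mode autodiff supplies the gradient automatically.
result = Optim.optimize(cost, zeros(6), BFGS(); autodiff = :forward)
```

Since the data here are noise-free, `Optim.minimizer(result)` recovers the true parameters; with real images the residuals would come from reprojection error instead.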
Wah! What is that? The only two ways I know of are: …
What you describe sounds super cool. Any chance you could spin up a MWE for how that would work? (About the tip: yes! Thank you!)
Just a small step towards a solution:

```julia
using CoordinateTransformations

function CoordinateTransformations.AffineMap(ps::Vector{Pair{T,R}}) where {T, R}
    # Solve first.(ps) ≈ A * last.(ps) .+ b via a linear solve.
    X = vcat(hcat(last.(ps)...), ones(1, length(ps)))'
    Y = hcat(first.(ps)...)'
    c = (X \ Y)'
    A = c[:, 1:end-1]
    b = c[:, end]
    AffineMap(A, b)
end

fixed_points = [[0, 0], [0, 50], [130, 50]]
moving_points = [rand(2) for _ in 1:3]
f = AffineMap(Pair.(fixed_points, moving_points))
@assert fixed_points ≈ f.(moving_points)
```

But I feel this can be generalized to n dimensions and perhaps optimized…
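The generalization hinted at here could be sketched as follows (a hypothetical `fit_affine` helper, not part of the package), using an over-determined 3-D example where the backslash operator performs a least-squares solve:

```julia
using CoordinateTransformations, LinearAlgebra

# Hypothetical n-dimensional variant: fit fixed ≈ A * moving .+ b from any
# number of point pairs; with more than n+1 pairs, `\` does least squares.
function fit_affine(moving::Vector{<:AbstractVector}, fixed::Vector{<:AbstractVector})
    n = length(first(moving))
    X = [reduce(hcat, moving); ones(1, length(moving))]'  # N × (n+1) design matrix
    Y = reduce(hcat, fixed)'                              # N × n targets
    c = (X \ Y)'                                          # n × (n+1) coefficients
    AffineMap(c[:, 1:n], c[:, n + 1])
end

# Six 3-D pairs related by an exact (made-up) affine map.
A = [1.0 2.0 0.0; 0.5 -1.0 1.0; 0.0 0.0 2.0]
b = [3.0, 4.0, -1.0]
moving = [[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1], [1.0, 1, 1], [2.0, 1, 0]]
fixed  = [A * m + b for m in moving]
f = fit_affine(moving, fixed)
```

Because the input pairs are consistent, the least-squares fit recovers the map exactly: `f.(moving) ≈ fixed` holds up to floating-point error.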
Given a set of point pairs `from => to`, compute the affine transformation that best reproduces the observed mapping. Closes #30
Given sets of point pairs `from_points => to_points`, compute the affine transformation that best reproduces the observed mapping. Closes #30
I might have missed how to do this (and would be glad to add it to the documentation), but how would one get a transformation from a (large) set of locations, say on an image, and their corresponding real-world coordinates?