Saturday, December 3, 2011

F3F Base Detection System: two bases are better

I've done some more development of the detection system, and I can now alternate two different video feeds to represent one camera at each base. For now, I simply mirrored the test video for that purpose ;-)
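The alternation between the two feeds can be sketched very simply: keep one frame source per base, and switch the active source whenever a crossing is detected. This is just a minimal illustration with hypothetical names (the post doesn't describe the actual implementation), modeling each camera as a generic frame iterator:

```python
class TwoBaseFeed:
    """Alternate between the Base A and Base B video feeds:
    on each detected base crossing, switch to the other camera."""

    def __init__(self, feed_a, feed_b):
        self.feeds = [feed_a, feed_b]
        self.active = 0                 # start watching Base A

    def switch(self):
        """Called on a base crossing: watch the other base."""
        self.active = 1 - self.active

    def read(self):
        """Read the next frame from the currently active feed."""
        return next(self.feeds[self.active])


# Usage with two dummy "cameras" (any frame iterator would do):
feed = TwoBaseFeed(iter(["A1", "A2"]), iter(["B1", "B2"]))
frame = feed.read()     # frame from Base A
feed.switch()           # glider crossed the base: look at the other end
frame = feed.read()     # frame from Base B
```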
The detection runs quite well. The system's motion detection is based on blob detection, computing an approximate weighted average for the glider's CG (more or less; still to be refined). If no blob is detected, the processing momentarily switches to frame differencing. The threshold for this is now partly automated, based on statistics (standard deviations of the mean colors) taken from the "background" frame that is stored shortly before some action takes place. This ensures that the frame differencing runs against a "recent" background, so that only actual glider motion is computed, and not light changes or older clouds that have already moved.
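The steps above can be sketched in a few lines of Python/NumPy. This is only an illustrative outline under my own assumptions, not the actual code: the threshold is derived from the background frame's own gray-level standard deviation (scaled by a hypothetical factor `k`), the frame is differenced against that recent background, and the foreground pixels' intensity-weighted centroid stands in for the glider's approximate CG.

```python
import numpy as np

def auto_threshold(background, k=3.0):
    """Derive a difference threshold from the background's own statistics:
    k standard deviations of the frame's gray levels (k is a guess here)."""
    gray = background.mean(axis=2)          # collapse color channels to gray
    return float(gray.std() * k)

def detect_glider(background, frame, k=3.0, min_pixels=5):
    """Difference the frame against the recent background; if enough pixels
    moved, return the intensity-weighted centroid (approximate glider CG),
    otherwise None."""
    diff = np.abs(frame.astype(float) - background.astype(float)).mean(axis=2)
    mask = diff > auto_threshold(background, k)
    if mask.sum() < min_pixels:             # too few changed pixels: no glider
        return None
    ys, xs = np.nonzero(mask)
    weights = diff[ys, xs]                  # stronger differences weigh more
    return (float(np.average(xs, weights=weights)),
            float(np.average(ys, weights=weights)))
```

Because the background is refreshed shortly before the action, slow scene changes (lighting drift, moving clouds) are already baked into it and don't exceed the threshold; only the fast-moving glider does.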
Due to some logic that I still have to improve, the system misses base crossings with some flying styles (higher climbs with vertical re-entry). So guys, stop flying such turns, or I'll never get enough sleep during the wintertime... Just kidding ;-)
Well, one more thing. I know some of you are interested in such a system, be it for practicing or maybe even for local competitions. I'm glad if you tell me, and I'll keep you informed, but it is quite premature to guarantee it will become field-proof. I've learned with my timing system how long it can take until things run smoothly and accurately! But as some of you know me, you can be sure that once I start scratching my head on such a project, I'll hardly abandon it in the middle of nowhere... especially since I know it could become field-proof ;-)
Wishing all of you guys a pleasant weekend.

1 comment:

  1. Hi,
    I was curious what your technical approach was. I am starting work on a similar system for powered pylon racing. The purpose is to do it for the technical challenge and hopefully have it be useful for practice, so we don't need two to three people with walkie-talkies letting us know if we cut or how long we went. The current approach is Linux on a BeagleBoard with a PlayStation Eye (great frame rate for an inexpensive USB camera). The software will leverage either OpenCV or the source code behind the Motion framework. Feel free to contact me at Regards, Joe DeLateur