SwiftVision Supplemental Materials


Swift Vision

Recently, I posted a “short” video about the CM5 lights and transcribing their states automatically in Swift. Here is a link to the GitHub repository. I’ve just committed an update that includes and enables “Mode 5” of the LED panel, in addition to the “Mode 7” that I had before.

For fun, here’s a new histogram of the mode 5 video, which I captured on an iPad.

[histogram image]

Also, here’s the timeline for mode 5 (really more a function of the lighting and the iPad than of the mode itself):

[timeline image]

What I’m sure iskunk and Mark are really after are the transcribed animation steps:
Mode 5

  1. #1 by iskunk on April 10, 2016 - 1:57 pm

    It’s great to hear that you’ve been pleased by the comments on that blog post—as you alluded, that is not often the case on the Internet!

    I grabbed the mode 5 transcript, and compared it with the output of my program. Here are my observations:

    - Step 0, I’m assuming, is just the panel before it starts up.
    - Step 1 is identical to step 429. So this appears to be uninitialized memory being displayed for 600 ms, presumably after you’d run the panel in mode 5 for about 85 seconds…
    - There are a number of extra steps in your transcript that appear to be thresholding errors. The first ten of these are 26, 32, 38, 44, 50, 56, 62, 68, 70, and 76. Note that each is offset by two or three frames from the preceding one, instead of five. The subsequent frame, on the other hand, is offset by five.
    - Step 78 is the first place where I see disagreement on part of the pattern. For example, for the top row, your transcript has 0x80FC versus 0x8088 from my code. That appears to be persistence from the previous frame.
    All that aside, the transcript and my code do appear to be largely in agreement. There are certainly runs where the two proceed in lockstep.
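    The persistence hypothesis in the step-78 example can be checked mechanically. This is a hedged sketch, not code from either program: the `isPersistence` helper and the previous-frame value of 0x00FC are made up for illustration.

```swift
// Hypothetical helper: the transcribed row disagrees with the expected
// pattern only by extra lit bits, and every one of those extra bits was
// also lit in the previous frame -- i.e. the discrepancy looks like
// decay from the prior state rather than a real mismatch.
func isPersistence(transcribed: UInt16, expected: UInt16, previous: UInt16) -> Bool {
    let extraBits = transcribed & ~expected
    return extraBits != 0 && (extraBits & ~previous) == 0
}

// 0x80FC and 0x8088 differ by 0x0074; if the previous frame had those
// bits lit (0x00FC here is an invented value), persistence explains it.
let plausible = isPersistence(transcribed: 0x80FC, expected: 0x8088, previous: 0x00FC)
```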

    tl;dr Computer vision is hard ^_^ I hadn’t expected that you would implement it basically from scratch, rather than use OpenCV or the like, although admittedly that would have been overkill for this project. AVFoundation sure seems to make extracting individual frames a lot harder than it needs to be, like they completely did not care about that use case.
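    For anyone curious what that frame-extraction dance looks like, here is a minimal sketch using AVFoundation’s `AVAssetImageGenerator`. The file name and frame rate are placeholders, and this is just one way to do it, not necessarily the repository’s approach:

```swift
import AVFoundation

do {
    // Placeholder path; substitute the actual capture file.
    let asset = AVURLAsset(url: URL(fileURLWithPath: "mode5.mov"))
    let generator = AVAssetImageGenerator(asset: asset)
    generator.appliesPreferredTrackTransform = true
    // Without zero tolerances, the generator is free to hand back the
    // nearest keyframe instead of the exact frame requested.
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    let frameRate = 30.0   // assumed capture rate
    for frameIndex in 0..<10 {
        let time = CMTime(seconds: Double(frameIndex) / frameRate,
                          preferredTimescale: 600)
        let frame = try generator.copyCGImage(at: time, actualTime: nil)
        // ... threshold `frame` and decode the LED states here ...
        _ = frame
    }
} catch {
    print("Frame extraction failed: \(error)")
}
```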

    Interesting seeing Swift, too; I wasn’t familiar with it. I see some Pascal-like bits (e.g. function return-value syntax), and language support for getter/setter calls (one of the few redeeming features of VB.net, which I had to deal with many years ago). Would be nice to see this standardized.
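    A quick illustration of the getter/setter support mentioned above, using a made-up `Panel` type (not from the actual project): a computed property runs code on every read and write, while the call site looks like ordinary field access.

```swift
struct Panel {
    private var raw: UInt16 = 0

    // Computed property: the `get` and `set` bodies execute on each
    // access, but callers read and assign as if it were a stored field.
    var topRow: UInt16 {
        get { raw }
        set { raw = newValue }   // validation or side effects could go here
    }
}

var panel = Panel()
panel.topRow = 0x80FC   // invokes the setter
let row = panel.topRow  // invokes the getter
```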

    (Yes, “eye-skunk” is how you pronounce his name 😉 I hadn’t come by here in a while, but just by chance revisited the LED-panels post after Jim’s first comment. That polynomial pretty much hooked me in.)

  2. #2 by hpux735 on April 10, 2016 - 2:37 pm

    Cool! I’m glad the transcript (mostly) lines up. The start-up behavior is interesting in both modes 7 and 5. In mode 7, it looks like 1s are pre-set into the registers and slowly shifted out, whereas mode 5 just jumps in with random (likely uninitialized, as you say) data.

    It’s interesting to single-step through the frames (especially in mode 7, because I used my DSLR) because you can see that either the new data is output on the LEDs bottom-to-top or the CMOS sensor is scanned top-to-bottom.

    Computer vision definitely is messy business. I’ve never tried it before, and it’s a relatively simple problem, so I thought it would probably be quicker to hack together something in Swift than to learn OpenCV from scratch. And, yes, it was awesome to get images from the movie, but getting a bitmap from the image was overly complex in my opinion.
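    For reference, here is roughly what “getting a bitmap from the image” involves in Core Graphics: drawing the `CGImage` into a context backed by a buffer you own. A sketch only, assuming an RGBA layout; not the repository’s actual code.

```swift
import CoreGraphics

// Render a CGImage into a caller-owned RGBA byte buffer (4 bytes/pixel).
func rgbaPixels(of image: CGImage) -> [UInt8]? {
    let width = image.width, height = image.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    let drew: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(
            data: buffer.baseAddress,
            width: width,
            height: height,
            bitsPerComponent: 8,
            bytesPerRow: width * 4,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return false }
        // Drawing the image into the context fills `pixels` with bytes.
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drew ? pixels : nil
}
```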

    I really love Swift. I feel like I can get my ideas down very quickly, and yet it’s easy to write relatively safe code.

    This has been a really fun experience. In the past five months, the Internet has been really rad. Between jumping in on open-source Swift and this, I have a renewed sense of hope for humanity. 🙂

  3. #3 by iskunk on April 11, 2016 - 1:48 pm

    Those progressive/slurred updates sure don’t help the cause of automated transcription. I used a bit of Perl to filter out those bogus steps, and while the end result compares a lot more cleanly, there are still anomalies; for example, a step got dropped right before step 79.
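    (The same filtering idea can be sketched in the thread’s own language, Swift; iskunk used Perl. The representation here, a list of the video-frame indices where each step first appeared, is an assumption for illustration, not the transcript’s actual format.)

```swift
// Drop "steps" that follow the previous kept step too closely. Real
// steps land about five frames apart; a step only two or three frames
// after its predecessor is likely a thresholding artifact caught
// mid-update.
func filterBogusSteps(frames: [Int], minGap: Int = 5) -> [Int] {
    var kept: [Int] = []
    for frame in frames {
        if let last = kept.last, frame - last < minGap { continue }
        kept.append(frame)
    }
    return kept
}
```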

    Still, this was a cool hack 🙂 And I do hope that these newer, better-designed languages can evolve to the point that they are good replacements for C/C++. Security-wise, we’re long overdue.

    Dillon, could I ask one more favor? I’d like to finalize that LED-panel code, and need just a couple more details. Jim said that the panel has a rotary switch to select the mode, and I’m assuming that modes 5, 7, 9, A and B (minus the freeze modes) are the full set that can operate without a PM. (Obviously, 5 and 7 are the ones we really care about; the other three are just for completionist’s sake.)

    Could you film a short video that is just switching between the different modes, letting each run for a few seconds? It doesn’t have to be lined up squarely or stably or anything, since there’s no further need for wholesale transcription. Mainly I want to see how each mode starts up, and confirm the timing and such. Oh, and to see mode 7 start up a few times, just to check if that “glitch pattern” is consistent or not.

    (I’m guessing that the colored status LED only does that sequence when the panel first powers up?)

