About Video Filters

When we were working on the Super Top Secret V2 site, we were trying to figure out what ASCII image we should include in the page source. A couple of the guys were experimenting with on-line converters that take an image and convert it to ASCII characters. This gave me the idea to do it with video in real time. I wanted to do it with a YouTube player, where you would enter a URL for any YouTube video and you would see a normal YouTube player, except the video was replaced by ASCII, but it turns out the YouTube terms of service won't allow this. So I had to settle for webcam footage.
Something I didn't like when looking at on-line converters was that they use colored characters. I consider that cheating, so I left that out.

Rather than hard-coding which characters should represent which shades, I do it dynamically when each color scheme is selected. When it instantiates, I run through all 95 printable ASCII characters in a for loop (character codes 32-126), add the single character to a text field, take a snapshot of the text field on its colored background, run through each pixel of that snapshot totaling the red, green and blue values, and divide by the total number of pixels in the snapshot, all to end up with a number between 0 and 765 that represents black to white. I take the character and the brightness value, create an object of them, and place it on a Vector where its index represents its brightness, offset so that the darkest character now represents black and the lightest now represents white. At this point, some of the indexes have multiple characters on them, and many have none at all. I then loop through all the indexes that have multiple characters and choose just one of the characters at random to represent that shade. Then I loop through all indexes, and for any empty one I copy the character from whichever nearest index has one. In the end I am left with a Vector of 766 entries (one for each brightness value from 0 to 765), each index containing one character to represent that brightness.

Then onEnterFrame, I grab a snapshot of the video footage, reduce its height and width to match the rows and columns of characters that fit on screen (also calculated dynamically), and then loop through each pixel of that snapshot, adding to a String the character that matches that pixel's brightness. At the end of the loop, I set a text field to display that String.
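The original is ActionScript, but the two halves of the idea (the one-time ramp calibration and the per-frame conversion) can be sketched roughly like this in Python. The `measure_brightness` function here is a made-up stand-in for the Flash step that renders a character into a TextField, snapshots it, and averages the pixels; real values would depend on the font and color scheme.

```python
import random

# Hypothetical stand-in for the snapshot-and-average step described above.
# In the Flash version this renders the character on its colored background
# and averages r+g+b over every pixel; here a fake lookup keeps it runnable.
def measure_brightness(ch: str) -> int:
    whiteness = {"@": 40, "#": 120, "o": 360, ":": 600, ".": 700, " ": 760}
    return whiteness.get(ch, 100 + ord(ch) * 4)  # 0 = black .. 765 = white

def build_ramp() -> list[str]:
    # Bucket all 95 printable ASCII characters (codes 32-126) by brightness.
    buckets: dict[int, list[str]] = {}
    for code in range(32, 127):
        ch = chr(code)
        buckets.setdefault(measure_brightness(ch), []).append(ch)

    # Offset/stretch so the darkest character lands on index 0 (black)
    # and the lightest on index 765 (white).
    lo, hi = min(buckets), max(buckets)
    ramp: list[str | None] = [None] * 766
    for b, chars in buckets.items():
        idx = round((b - lo) * 765 / (hi - lo))
        ramp[idx] = random.choice(chars)  # one character per shade, at random

    # Any empty index copies the character from the nearest filled one.
    filled = [i for i, c in enumerate(ramp) if c is not None]
    for i in range(766):
        if ramp[i] is None:
            ramp[i] = ramp[min(filled, key=lambda k: abs(k - i))]
    return ramp

def frame_to_ascii(pixels: list[list[tuple[int, int, int]]],
                   ramp: list[str]) -> str:
    # pixels is the frame snapshot already shrunk to one pixel per character
    # cell; each pixel's r+g+b sum indexes straight into the ramp.
    return "\n".join("".join(ramp[r + g + b] for (r, g, b) in row)
                     for row in pixels)
```

`build_ramp()` would run once per color-scheme change, and `frame_to_ascii(snapshot, ramp)` is the onEnterFrame half, producing the String that goes into the text field.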