08.24.08

The Future of Television

Posted in Media, Tech at 12 pm

A brief rambling of thoughts regarding television and video transmission as they will evolve in the coming decades:

1) The end goal? The Star Trek Holodeck: a 3-D representation of a scene that can be viewed from any angle. Putting aside the hokiness, this is what TV is heading towards: a reproduction of an environment in all physical dimensions.

2) In order for this to be feasible, flat 2-D capturing is useless. Video today is just a series of bitmap images. The next generation of video will merely be stereo 2-D: two images of the same scene at the same time. Great, so we’ve replicated the depth of a scene, but we’re still stuck with the single perspective of the original pair of cameras.

3) If stereo images for ‘faux-3D’ aren’t enough, then what we need are more cameras, right? Well, then where does that end? Do you build a giant sphere of cameras, all pointed towards the center of the action? That might work okay for a movie like Cube, but for, let’s say, filming a climb of Mount Everest, it isn’t the way to go.

4) There are two basic ways of representing images in digital formats: bitmaps or vectors. Bitmaps are grids of pixels: perfect for paintings, documents and flat video. Bitmaps are fine when you want to shrink an image, but they are useless for enlarging one. If you take a 100 pixel by 100 pixel image and blow it up to 1 mile by 1 mile, you’re going to get individual pixels that are over 50 feet on a side. The same image described as vectors, however, could be rendered with pixels as small as a nanometer and still be an accurate representation of the image.
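
To make the contrast concrete, here’s a toy sketch in Python (purely hypothetical; no real graphics library is this simple-minded). Enlarging a bitmap just duplicates pixels into bigger blocks, while a vector description scales without losing precision:

def scale_bitmap(pixels, factor):
    # Enlarge a grid of pixels by repeating each one factor-by-factor times.
    # No new detail appears; each source pixel just becomes a bigger block.
    return [[pixels[y // factor][x // factor]
             for x in range(len(pixels[0]) * factor)]
            for y in range(len(pixels) * factor)]

def scale_vectors(points, factor):
    # Scale a shape described by coordinates. The description stays exact
    # at any size; only the final rendering picks a pixel resolution.
    return [(x * factor, y * factor) for (x, y) in points]

bitmap = [[0, 1], [1, 0]]                # a 2x2 image
big = scale_bitmap(bitmap, 3)            # a 6x6 image of chunky 3x3 blocks
triangle = [(0, 0), (1, 0), (0.5, 1)]
print(scale_vectors(triangle, 5280))     # the same triangle, a 'mile' wide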

5) If we want the ability to view a scene in all of its physical dimensions, we will need to capture the points in space (x,y,z coordinates/vectors) of as many elements as we need in order to re-create the scene. Take track events as portrayed in a movie like Chariots of Fire. In order to truly capture the event, we’ll need to track the spatial locations of every significant element. I would guess these to be the track, the starting line, the finish line and the runners.
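
As a rough sketch of what such a capture record might look like (hypothetical Python; the class and element names are mine, not any real format):

class TrackedElement:
    # One significant element of the scene, with time-stamped 3-D positions.
    def __init__(self, name):
        self.name = name
        self.positions = {}        # time in seconds -> (x, y, z) in meters

    def record(self, t, x, y, z):
        self.positions[t] = (x, y, z)

# Sample a runner at the gun and again half a second later.
runner = TrackedElement("runner-3")
runner.record(0.0, 0.0, 0.0, 1.0)
runner.record(0.5, 4.1, 0.0, 1.0)
scene = [runner, TrackedElement("finish-line"), TrackedElement("track")]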

6) This should be subdivided further, however. Not just the runners, but the various body parts of the runners: legs, arms, heads. Maybe fingers? How about the starter’s gun? The trigger on the starter’s gun? The finish line tape?

7) We need to decide what’s truly important to capture: the runners, yes. The starting line and the finish line, yes. The crowd? Mmmm, maybe. Films have been using ‘standard crowd noise’ for decades in place of recording the actual crowd on set, and movies have been filling stadiums with mannequins, inflatables, or digital post-production. Maybe the specifics of the crowd are unnecessary for the scene.

8) We need to capture as much as possible, but we could extrapolate a number of the other points from a small set. Perhaps we know where the starter’s gun is, but instead of keeping track of the official that is pulling the trigger, we simply estimate the height of a person that would be holding a gun at that angle and height and make an approximation of the official. We know how the ribbon at the finish line would move and float given the motions of the runners, the wind, and the tautness of the tape. Do we need to know the exact location of a runner’s knee if we already know where their hips and toes are? Maybe, but we probably don’t have to know where the ankle is if we know where the heel and the knee are.
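
A crude first guess at a missing joint might be nothing more than this (a hypothetical sketch; a real skeletal solver would respect bone lengths and joint constraints rather than draw a straight line):

def estimate_joint(a, b, t=0.5):
    # Guess an intermediate joint (say, a knee) as a point some fraction t
    # of the way along the line between two known joints (hip and toe).
    return tuple(pa + t * (pb - pa) for pa, pb in zip(a, b))

hip = (1.0, 0.0, 1.0)
toe = (1.2, 0.0, 0.0)
print(estimate_joint(hip, toe, t=0.45))    # a rough guess at the knee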

9) Once we have those points in space, we can recreate the locations, but short of capturing the location of every thread of the clothing being worn or each lace of each shoe, we’re probably going to want to capture a ‘skin’ or a ‘texture map’ that would be wrapped around the skeletons (vectors) of the runners. The skin could be captured ahead of time, or could be extrapolated from a video feed. We’ve already seen projects that take varied photographs and collect them into a multi-faceted view of a single object. In much the same way, a set of stills taken over time could create a texture map.
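
Hand-waving wildly, the stills-into-texture idea might start out like this (hypothetical; the genuinely hard part, registering each frame to the same patch of surface, is assumed away):

def build_texture(stills):
    # Merge aligned stills (grids of gray values, all the same size) into
    # one texture patch by averaging them. Assumes the frames are already
    # registered to the same surface, which is the hard part being skipped.
    h, w = len(stills[0]), len(stills[0][0])
    return [[sum(frame[y][x] for frame in stills) / len(stills)
             for x in range(w)]
            for y in range(h)]

frames = [[[100, 110], [120, 130]],
          [[102, 108], [118, 132]]]
print(build_texture(frames))               # one merged 2x2 texture patch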

10) That same capture of the texture maps could be used to extrapolate the x/y/z of the original skeletons. Today’s motion capture techniques have relied on ping-pong balls taped to actors in green body suits and similar setups. Those configurations are simply work-arounds that let us capture the models easily with today’s technology, and they are ultimately unnecessary. Once we have the necessary visual-processing tools, we can forgo the artificial setups and special configurations and rely on the original video captures.

11) This sort of capturing and transmission becomes possible once we move from thinking about capturing a flat plane of pixels to capturing the coordinates and texture maps of a scene. The information can still be captured by a single video camera, given enough processing power. But when we add a second camera, we can collect better textures and more accurate coordinates. Add a third and the quality of the capture increases again. Add a dozen and you’re capturing every detail needed to analyze an event in everyday scenarios.
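
The second-camera payoff is the textbook stereo case: with two identical, parallel cameras a known distance apart, the shift (disparity) of a point between the two images gives its depth. A minimal sketch, with the numbers invented:

def depth_from_stereo(x_left, x_right, focal_px, baseline_m):
    # For two parallel cameras: Z = f * B / d, where d is the horizontal
    # disparity in pixels. One camera gives no disparity and thus no depth;
    # the second is what turns flat pixels into x/y/z coordinates.
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point should sit further left in the left image")
    return focal_px * baseline_m / disparity

# A point 8 px apart between cameras 0.5 m apart, with an 800 px focal length:
print(depth_from_stereo(412, 404, focal_px=800, baseline_m=0.5))   # 50.0 m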

12) What does this all offer? Imagine watching Chariots of Fire from the actual point of view of one of the runners. Or from the official’s. Or from the finish line tape, or from a runner’s shoe. Or from directly overhead. The number of possible perspectives is immense. Imagine changing the scene by adding a 100 mph wind to it. Or altering the track so it goes in a loop-de-loop.

13) And talk about scalability: if you want to transmit this scene to someone, you have the option of A) sending a fully rendered image, like you would to a current television; B) sending a pair of images to a stereoscopic video display (yes, that’s by my employer); C) sending a small set of the captured data to a cell phone or personal media device for a low-res, animation-style rendering; or D) sending a full feed of all the details to a computer-driven display, where a mouse or 3-D controller could be used to navigate around the scene.
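
In sketch form, the transmission side becomes one capture, many payloads (hypothetical Python; the display names and scene layout are invented for illustration):

def render_flat(scene, eye="center"):
    # Stand-in for a real renderer: flat video from a single viewpoint.
    return "video from the %s viewpoint of %d skeletons" % (
        eye, len(scene["skeletons"]))

def payload_for(display, scene):
    # Pick what to send for one captured scene, per options A-D above.
    if display == "tv":        # A) render server-side, ship flat video
        return render_flat(scene)
    if display == "stereo":    # B) render twice, once per eye
        return (render_flat(scene, "left"), render_flat(scene, "right"))
    if display == "phone":     # C) a trimmed skeleton set for low-res animation
        return {"skeletons": scene["skeletons"][:10]}
    return scene               # D) the full feed; the viewer navigates it

print(payload_for("stereo", {"skeletons": [None] * 12}))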

14) Today we are capturing the equivalent of a single, low-quality texture map. Soon we will be capturing higher-quality single texture maps, but this is just a baby step forward. We need to build tools that will take those bitmaps and break them down into their component parts: vectors of skeletons, plus texture maps. We can blend in approximations of the missing texture, enhance the scene with up-close photos, and extrapolate to fill in the additional x,y,z coordinate points we’re missing. None of these techniques are outside of our reach.

04.19.08

Einstein on ModBook

Posted in Apple, Tech at 1 pm

Drooling, sputtering… Einstein on ModBook… WANT!

04.16.08

History meme

Posted in Tech, Web at 10 am

From the History meme, which I find kinda fascinating. Here’s mine, with the command split into three lines for display purposes. If you try this, you should put it all on one line.

history | awk '{a[$2]++}END
{for(i in a){print a[i] " " i}}'
| sort -rn | head

72 ping
72 ls
63 curl
62 cd
37 cal
32 whois
20 ssh
19 man
15 sudo
7 traceroute

I would suspect this is a common set for most web developers. That ‘cal’ entry is because I often want to pull up a quick calendar for the year to check dates, but I don’t want to open iCal. Terminal is often open, so it’s an easy reach to type ‘cal 2008’ or somesuch.

03.03.08

Wither X-UA-Compatible?

Posted in Tech, Web at 3 pm

IEBlog : Microsoft’s Interoperability Principles and IE8:

We’ve decided that IE8 will, by default, interpret web content in the most standards compliant way it can. This decision is a change from what we’ve posted previously.

Yay. Nice of them to come to our senses.

02.20.08

WebVisions rides again

Posted in Design, Media, People, Tech, Web at 12 pm

While everyone else is getting ready for SXSW, the WebVisions board has been busy getting this year’s edition ready to go. I’m really excited that Jeffrey Veen is back. His presentation (5 or so years ago) was one of the best ever, and even though we usually don’t have return speakers, Veen is one of the few that I’m truly glad to hear again.

WebVisions: May 22 – 23, 2008 – Portland, Oregon

Media, technology and consumer trends visionary Lynne Johnson will join WebVisions to deliver the Thursday keynote address. Lynne is the Senior Editor and Community Director for FastCompany.com, a leading website and community for people passionate about business ideas that also offers the complete content of Fast Company magazine. She also writes a technology blog following web, media, and consumer trends for FastCompany.com, and guest blogs for techPresident and Black Web 2.0.

An internationally sought-after sage, author, and user experience consultant, Jeffrey Veen will return to WebVisions to deliver one of the event’s keynote addresses. Currently a Design Manager and project lead for Google’s Measure Map project, Jeffrey is returning to WebVisions to share his vision for the future of the Web.

At this point, WebVisions as an event runs really smoothly. We get a good set of volunteers returning each year, and my Tech Crews are always on top of things. I’m the stage manager: I try to make sure that each speaker is prepared and comfortable, the audience is undistracted, and the volunteers understand that the audience expects a great experience and that we want to give them an outstanding one.

WebVisions is incredibly cheap and for the quality of the speakers and the location, it cannot be beat. I hope you’re coming!

02.10.08

Corny SF Joke

Posted in Media, Tech at 8 pm

Q: What did the Dalek dermatologist say to its patient?

A: “Exfoliate!”

(I’m truly sorry… I thought that up this morning. I realize there’s only a small sliver of overlap in the Venn diagram between the circle of “UK SciFi TV enthusiasts” and the circle of those “familiar with skin care techniques.”)

02.03.08

A fresh iPhone each morning

Posted in Apple, Media, Tech at 1 pm

I’m subscribed to a couple of short daily podcasts. Merriam-Webster’s Word of the Day and Scientific American’s 60-Second Science are short little podcasts that are an interesting way to start my daily commute.

However, it’s bugged me that when I drop my iPhone into its cradle at night when I get home, it syncs up the podcasts at that time, anywhere from 6 to 11 pm. The next day’s podcasts, though, are not available until after midnight. In order to get the Word of the Day on the Day of the Word, I’ve been resorting to picking my iPhone up out of the cradle each morning as I’m rushing out the door, dropping it back in, and then waiting for it to sync. Sometimes this takes just a few seconds, but if one of my other, longer podcast subscriptions has had an overnight update, it can take a few minutes.

It’s a small matter of modern living, but I figured there ought to be a better way. iTunes doesn’t have a native way of telling an iPod or iPhone to refresh at a certain time. There are two times when a sync will start: when you hit the Sync button in iTunes, and when iTunes first connects with the iPhone.

The Sync button method is a no-go for me because A) it requires me to do something, and B) in order to do it, I need the screen turned on and the mouse ready to click.

But when I say “when iTunes first connects with the iPhone”, there are a multitude of ways that this could happen. It could be the time when I plug the iPhone in while iTunes is running. It could be when I restart the Mac and iTunes automatically launches and finds the iPhone connected. Or it could be whenever iTunes gets launched. All that needs to happen is for iTunes and iPhone to become disconnected and reconnected.

So what are my options? I could put the Mac on an outlet timer and force it to power down and then start back up again. I could set the Energy Saver preference pane to schedule a shutdown and startup of the Mac. I could build a contraption out of Legos that would lift the iPhone out of its cradle and then slam it back down again. I could have a similar contraption that pulls the USB cable out of the Mac and plugs it back in. I could put the USB hub on a timer at its power connection.

But far more simply, I could use AppleScript to tell iTunes to quit and then tell iTunes to run. The key is getting said script to run at the appropriate time. The easiest way of doing that is to schedule an event in iCal and use its alarm function to trigger the script. So here we go…

1) In your Applications folder look for the AppleScript folder and then open the Script Editor.

2) Type the following lines:

tell app "iTunes" to quit
delay 30
tell app "iTunes" to run

3) Click the Compile button and you’ll see the code get nicely formatted, color-coded even.

4) If you want to test it, click the Run button. iTunes will quit if it’s already running and then 30 seconds later it will re-launch.

5) Save the script and call it something obvious like “iPhone Refresh”. I saved it to the Documents folder, but you can save it anywhere. You don’t need to set any other options in the Save dialog box. The defaults are fine.

6) Open up iCal and double click on the time of day when you’d like the script to run. I set it up to run at 6am.

7) Set the event to repeat daily.

8) Set the alarm to “Run script”.

9) Below the Run Script setting click and select “Other…” and then find the script file you just saved.

10) Set the “Minutes before” to zero.

That’s it. I found lots of other ways to specifically choose the “Sync iPhone name” menu item, but they were 5 to 10 times the amount of code with no further advantages. My method will refresh any and all iPhones or iPods connected to the machine, it will disconnect anyone that is ‘sharing’ the iTunes library, and it will help stave off any memory leaks that iTunes might develop. These are unintended consequences, but in my situation, they’re all good ones.

02.02.08

X-UA-Compatible

Posted in Tech, Web at 1 am

<meta http-equiv="X-UA-Compatible" content="IE=8;FF=3;OtherUA=4" />

Ah, now there’s the rub. This whole thing with the X-UA-Compatible HTTP header has basically been portrayed as a way for the web creation community to bail Microsoft out of a tight corner it got itself into. With both Zeldman and Meyer supporting it and many, many people railing against it, I’ve been trying to figure out where I stand. Here’s my gut reaction and how I got there.

  1. It makes perfect sense for Microsoft to do this.
  2. It makes perfect sense for everyone else to ignore it.

1) This is the next version of DOCTYPE switching. No, it is not. DOCTYPE switches were not simply a method to choose a rendering engine. Using the DOCTYPE to switch to ‘standards’ mode worked well because using it made the page’s markup more valid. It was a situation where the hand of standards was slipped into the glove of DOCTYPE. X-UA-Compatible does nothing of the sort. It simply adds more information to the header of a page. (Interestingly enough, this is exactly the sort of thing HTML5 is shooting to *reduce*.)

2) “IE=8;FF=3;OtherUA=4” Exactly who designated IE as an ‘official’ abbreviation for the browser from Microsoft? And who said that FF was adequate to represent the multitude of Gecko-based browsers out there? Talk about arbitrary. I would hope that Microsoft would put forth some sort of official registry for these “browser codes,” like we have for MIME types and for well-known network port numbers. At this point, this is going to be about as helpful as User-Agent strings…

3) What IE=7 really means What Microsoft is stating with all of this is that they are happy to designate IE 7 as being their final answer, their best effort to present what I’ll call “IE7HTML”. Much like HTML 4.01 or XHTML, this is a specific flavor of HTML. Our friends in Redmond are also declaring IE7HTML as the final version of HTML. This stems from the idea that all pages that are not designated with IE=8 (or 9 or 10 or ‘edge’ or whatever) will default to IE 7’s rendering engine. So that’s it. HTML 3.2, HTML 4.01, XHTML can all be put out to pasture because IE7HTML will be the default way for MS’s browser to render the World Wide Web.

4) Smart for Microsoft This is incredibly intelligent of Microsoft. Here we have a great language (IE7HTML) that can be used to present Web Pages, is really good at Documents, and can be forced into use for Applications. The IE7HTML language works with all of that ‘interesting’ code those goofy guys working on Word used for their ‘Export to HTML’ function. What a bunch of comedians over there! Who can forget such funny tags as “<o:p>”?

But then, when IE7HTML becomes ‘old news’ or too limiting, where will we go next? HTML5? HTML6? No! We will need to abandon HTML entirely, because unless web creators write code that specifically tells IE to use a later rendering engine, it will always default to IE7-style rendering, sending us right back into the arms of IE7HTML. In order to break out of it, we’ll need some new web technology that doesn’t use HTML at all… What about Flash? Oops! Flash requires the OBJECT or EMBED tags from HTML. I wonder if anyone has a technology that displays rich graphics and advanced (i.e., desktop-application-style) interfaces. Well, golly gee! Someone does! And it’s a good thing that it’s being fostered by a company with such a passion for open communication and shared standards.

Gosh, isn’t it funny how making the outdated version the default rendering engine turns Microsoft’s last, best chance at controlling the web into a viable alternative?

5) Ignore it and it will go away. In this case, the old sarcastic admonition might be true. If no other browser respects this X-UA-Compatible tag, and if only a small minority of web creators and Microsoft tools support it, the overwhelming majority of the web will continue to evolve and grow and adapt to new technologies. Looking 10 years down the road, the bulk of the web will use new technologies. The IE browser family and its later generations, by defaulting to IE7HTML rendering, will become increasingly outdated. It will certainly be able to read old and outdated web sites, but yet again Microsoft will have painted IE into a corner. They will then create a new browser (Windows-Yahoo-Live Explorer, anyone?) that will skip forward to modern-era web pages.

6) Is there any chance we could have a <sarcasm> tag added to HTML5? I’m actually serious about this.

01.25.08

Who Are You?

Posted in Life, People, Web at 10 pm

From Jock:

Lemur-Labs – Who Are You?:
It is quite possible that half of the reason that people watch the various editions of CSI can be attributed to the brilliant choice to play The Who during the opening. The selection of Who Are You? is especially brilliant. It speaks directly to the core of any criminal investigation: establishing identity.

It’s cool to listen in while the wizard makes up his latest spell… Even cooler when you get mentioned. : )

01.20.08

Price quotes for websites, the quick way

Posted in People, Web at 12 am

Ben, another suggestion on how to respond to this question…

Q: “How much does a Web site cost?”

A: “How much does a book cost?”

This usually sets the stage pretty well, since people start to understand the possibility that all web sites may not be the same size, even though they are viewed through the same browser window… This tactic has worked well for me in the past.

(I didn’t think it was that pithy…)