Making Meaning

It’s been a busy week, but I wanted to get back on the blogging horse before I fell too far behind. Too much thinking, not enough capturing ideas in a more coherent and permanent form.

I spent one day last week at Garage Technology Ventures’ Art of the Start. I had the presence of mind to record the whole thing on my laptop, but the recording quality is pretty horrible, so I’ll be making transcripts available over the next week or so (the first session transcript is available here). Guy Kawasaki had some interesting things to say, most notably his comments on the need for entrepreneurs to focus on “making meaning” in their endeavours.

I’ve been thinking a lot about “making meaning” over the past couple of months. At its core, making meaning is about improving the world – for me, this has meant trying to figure out how to take my software skills and apply them to real problems (by which I mean “problems that matter”). The question in my mind has been: is it possible to make meaningful change through software?

In my mind, I have associated “real world problems” with environmental problems – the kind of problems that require engineering rooted in the physical world. By this definition, the path to meaningful change through software is unclear. Software is so intangible – how can a bunch of 0s and 1s save us from ourselves? Isn’t most software dedicated to solving ‘artificial’ problems, problems originally created by someone else’s software?

But the more I think about it, the more I’ve come to realize that software has the potential to play a much bigger role, even though its domain of influence rarely intersects with the physical world – at least not directly. By applying pressure in the right places, however, software can bring about social change that does affect the real, physical world.

To illustrate how, consider the recent revolution in the world of music.

Since the advent of the compact disc, we’ve been distributing music in the most circuitous fashion: translating analog signals into digital bits, only to press those bits into plastic discs that need to be packaged and transported. It seems a bit ludicrous, transforming clean (arguably environmentally friendly) digital bits into a physical form that requires additional energy to package and transport. At the time, the lack of bandwidth made it justifiable – but no more. Cheap computers and peer-to-peer networking software have wrought tremendous change on the industry in a short period of time, exposing the inefficiencies of the current system and routing around the brain damage of an industry in decline.

But the fight isn’t over yet – software is locked in an epic battle, matching innovation against legal attacks mounted by a music industry unwilling to redefine itself. In the wake of this battle, some truly absurd “solutions” have emerged. Consider the mass of companies offering CD ripping services – requiring you to ship your CDs to them (more transportation and energy costs) so that they can verify you own the music before ripping it to a DVD or hard drive for transfer to an iPod. That is truly messed up.

The key to winning this battle is to continue writing software that outmaneuvers the legal attacks – whether it’s software that destabilizes anachronistic media industries by enabling or hiding content distribution, or software that provides new models for empowering content creators (I’d argue that the Creative Commons license counts as software, albeit legal software). The goal is to keep proving the futility of perpetuating the current model, along with its indirect fallout in the real world (manufacturing and transportation environmental impacts, for example).

Maybe software can change the “real” world after all. It’s just a matter of finding the right pressure points. Oh, and figuring out how to make a buck while you’re at it.

Doing Things Once

A number of years ago, I had the pleasure of working with Roy Philips at BC Tel Advanced Communications. At the time, BC Tel was heavily investing in fiber optic installations, and the Advanced Communications arm was pushing hard to sell “Ubiquity” – a high-end videoconferencing service – to leverage the fiber investment. At $15K a month, plus installation costs, Ubiquity was a hard sell in 1994. Roy, however, saw an interesting future for the technology, one which sadly has yet to come to fruition.

On numerous occasions, Roy and I discussed the process of university education. I was, as usual, quite disappointed with the methods professors used to communicate with students. In many cases, university professors just weren’t very good teachers – nor, to be fair, was that their purpose. They were there to do research. Every so often, you’d have a brilliant professor, someone who not only really knew their stuff, but also knew how to make it stick to the inside of the skull of an undergraduate at 8:30 in the morning, despite the student’s half-inebriated state. Those star professors were few and far between – I often wished we could capture those professors and make them available to everyone.

One-to-many broadcasting was a perfect solution. As Roy saw it, universities should collaborate to find the best professor for a subject, capture that professor’s lecture series on video, and make it generally available. Instead of attending lectures, students would watch the video and then attend a videoconference tutorial session with the professor. Thus, the “best” teacher could be made available to the masses, improving the overall quality of education.

Unfortunately, the form of education has changed very little, despite the widespread availability of high-bandwidth networks. In one shining example of thought leadership, MIT’s OpenCourseWare has made the notes from its courses available. In other areas, two MIT professors have released their thermal physics textbook online, completely free. These efforts, though worthy of praise, signal only the beginning of a movement to make education freely and easily available.

To be truly successful, such a movement requires educational content that is both freely available and freely open to revision, encouraging constant and rapid improvement (perhaps applying the lessons learned by Wikipedia). It seems it would be in the best interest of everyone to produce and maintain such a repository, not only to improve education, but to reduce its cost as well. I don’t know about you, but I didn’t exactly think it appropriate for publishers to charge me over $100 for the new edition of a calculus textbook, especially when the concepts hadn’t changed since Newton created them. Were these publishers really adding any value?

If free educational content were created and unleashed, it could even provide some interesting opportunities for education tailored to the student. In an ideal world, content would consist of not only static text, but also interactive questions to test the student’s comprehension. How the student fared on the informal quizzing could be used to fine-tune how information is presented, adapting to the particular learning style of the individual. As I’ve lamented before, there has got to be a better way to approach education than the rote learning we currently use.
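To make the idea a bit more concrete, here’s a minimal sketch of quiz-driven adaptation. Everything here is hypothetical – the class name, the mastery threshold, and the three-way branching are illustrative choices, not a description of any real system:

```python
# Hypothetical sketch: adapt what a student sees next based on
# informal quiz performance. Thresholds are arbitrary assumptions.

class AdaptiveLesson:
    """Tracks quiz results per topic and suggests the next step."""

    def __init__(self, mastery_threshold=0.8):
        self.results = {}  # topic -> list of bools (correct/incorrect)
        self.mastery_threshold = mastery_threshold

    def record(self, topic, correct):
        """Record one quiz answer for a topic."""
        self.results.setdefault(topic, []).append(correct)

    def score(self, topic):
        """Fraction of answers correct so far (0.0 if unseen)."""
        answers = self.results.get(topic, [])
        return sum(answers) / len(answers) if answers else 0.0

    def next_step(self, topic):
        """Choose how to present the topic next."""
        s = self.score(topic)
        if s >= self.mastery_threshold:
            return "advance"        # concept mastered; move on
        elif s >= 0.5:
            return "more-practice"  # close; offer extra exercises
        else:
            return "re-present"     # re-teach with a different approach


lesson = AdaptiveLesson()
for correct in (True, False, True, True):
    lesson.record("derivatives", correct)

print(lesson.score("derivatives"))      # 0.75
print(lesson.next_step("derivatives"))  # more-practice
```

A real system would obviously need richer signals than right/wrong answers, but even this crude loop captures the core idea: the content responds to the student rather than the other way around.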

All of these ideas are far in the future – for now, why don’t we try to eliminate the empires being built on public domain knowledge? Seriously, how much has first-year physics changed in the past hundred years? Very little – so why should a student pay $100 for that knowledge? It will be interesting to see MIT lead the way, demonstrating that we can be more productive as a society by doing things once – and doing them right the first time.