
31 May 2008

Linux off the Desktop

Linux, and open source projects in general, grow in very interesting ways, because the growth of open source projects cannot be planned. During the Google IO conference, one of my favorite talks was Chris DiBona's Open Source is Magic talk. One of the things explained in that talk was how open source software is not planned by a group of executives wearing suits; in fact, it is not planned at all. Most open source software originates with one developer saying "Oh, this would be really cool!" and then implementing the skeleton of that project. Then Linus's Law kicks in: "Many eyeballs make all bugs shallow." More developers may join, and hopefully the project is released under one of the licenses that 99.9% of all open source projects use. Although its unpredictability is one of Linux's greatest strengths, it makes it hard for "industry analysts" to sit down and declare that "Linux is going to go here in five years." Five years ago, no analyst foresaw the Eee PC. Of course, this same unpredictability leads to Steve Ballmer saying things like Linux is "a cancer" in an intellectual property sense.


Due to the unpredictability of everything outside Linux's kernel (which is mostly controlled by Linus and a small group of elite developers) and the current state of the computer market, Linux has grown in interesting ways. It has become an attractive alternative to Windows on low-end PCs for two reasons: one, these PCs cannot handle Vista's monstrous 16GB installation, let alone its memory requirements; and two, Windows is expensive; even for OEMs it costs about twenty dollars per machine. In these low-price PCs, every little bit counts.


One of the main attractions of Linux's licensing, and all free licensing, is that there is no implied support. As Chris DiBona said, if you charge any amount of money for your software, you are implying that you will fully support it, and you will get emails asking for help and requesting features. Commercial support for Linux is available, but the community support is just as good, if not better. The community built Linux from the ground up (nearly; see Minix), so it is logical that they are the ones who know how to use it best.


Regardless of the support issues, it is interesting to see how Linux and Mac OS X together are cornering Windows in the software market: Linux is working its way up from the low end of the scale with the help of Asus and the likes of the Eee PC, while Mac OS X is swiping the "cream of the crop" at the high end. With the high-end users who are willing to pay a premium gone to the Mac, and the plentiful low-cost users switching to a free OS instead of an expensive, draconian rights-managed one, Microsoft has nowhere left to run; it is caught in the middle between Apple and Linux. Although Linux is by nature unpredictable, the unpredictability culminating right now may change the shape of the computer industry over the next five years. We just have to wait and see how.

29 May 2008

Touching is Believing ... On a Small Scale

Following the iPhone and its success, Microsoft has decided to bring the touchscreen goodness of a coffee table to its next version of Windows. See here. But is this really a good idea?

The idea of a desktop touchscreen is tricky. Once developers figure out that fingers are not mice, and that user interfaces should not be designed around the developer's habits, they have to realize that the size of a finger pretty much breaks backwards compatibility with menus. Microsoft has built an empire on vendor lock-in and corporate we-absolutely-have-to-use-this-piece-of-legacy-software-that-only-works-on-Windows-ness, and if it brands Windows Seven as the we're-so-cool-we-don't-need-a-mouse OS, it may be dead in the water.

28 May 2008

An open world: Spread Virally

I'm here at the Google IO conference, and have just listened to the OpenSocial section of the first keynote. One of the interesting things I noticed about OpenSocial is that once a company has deployed its app on a social network, the app spreads at roughly the rate of a virus. Applications that have to be picked up by individual users from an enormous "select applications" screen go nowhere; a social network application needs to be adopted through one's friends. Of course, app developers do not simply hope that friends hear about their app by word of mouth; apps on a social network can send notifications and "invitations" to join the app and share things with your friends. When a friend receives such a notification, they can accept it and move on without using the application, or use the application and "pass it on" by sending similar notifications to their own friends. This produces the "web" diagram commonly associated with social networking, and the spread model is about the same as a virus's.

For example, assume that each user sends notifications to all of his or her friends, but only three friends really adopt the new app. (Yeah, small number.) If each of those three friends has three more friends that adopt it, there are thirteen people using the app so far, and the number only gets larger: one more iteration and you have forty users, and one more and you have 121. This is the viral spread of social apps; it forms an exponential curve. Social apps are not just each person finding and discovering on their own; they are people finding and discovering, and then passing it on. The heavy integration provided by APIs such as OpenSocial is just a vessel to let apps access a contacts list when they send out notifications, and to position them in a place where they are going to be talked about, and thus spread.
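For the curious, the arithmetic fits in a few lines of JavaScript; a quick sketch, using the same made-up rate of three adopters per user:

    // Total users after n "generations" of invitations, starting
    // from one user, with r new adopters recruited per new user.
    function totalUsers(r, n) {
        var total = 1;   // the original user
        var newest = 1;  // users added in the latest generation
        for (var i = 0; i < n; i++) {
            newest *= r; // each new user recruits r more
            total += newest;
        }
        return total;
    }

    totalUsers(3, 2); // 13
    totalUsers(3, 3); // 40
    totalUsers(3, 4); // 121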

Motion in the Mobile Web

One of the advantages developers on the mobile web have, if they are going to pursue separating "desktop" sites from "mobile" sites, is that the mobile web is more up-to-date with regard to standards than the desktop web (here's looking at you, IE, and to some extent Firefox). Mobile WebKit nearly passes Acid2; I am not sure about Mobile Opera. Either way, mobile browsers, specifically those based on WebKit and Opera, are more advanced than a certain browser that web developers always have to worry about. Plus, all mobile browsers (sans mobile IE) support CSS rounded corners: Firefox has -moz-border-radius, Opera has -o-border-radius, and WebKit has -webkit-border-radius. Rounded corners may become a big thing on the mobile web, as they do not require any extra images to load.
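A complete set of rules for a rounded box is tiny; a sketch, with an arbitrary class name and radius:

    .bubble {
        -moz-border-radius: 8px;    /* Gecko */
        -o-border-radius: 8px;      /* Opera */
        -webkit-border-radius: 8px; /* WebKit */
    }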

Today, WebKit dominates the mobile web: Motorola uses it in its MOTOMAGX browser, Apple in Mobile Safari, Nokia in its default browser, and Google in Android. Because of this recent adoption of WebKit as an engine, the majority of the mobile web now has access to a pair of CSS properties that may make the whole experience much better: -webkit-transition and -webkit-transform. I mentioned these two properties previously in my post about the semantic web. Using them, web pages can achieve hardware-accelerated animations, which matters on a mobile phone, where every little bit counts. And because most of the mobile web supports this, auxiliary animations may become the standard, making the mobile web a vastly different experience from the desktop web.
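A sliding drawer is a representative use; a minimal sketch, with the class names made up:

    .drawer {
        /* animate any change to the transform over 300ms */
        -webkit-transition: -webkit-transform 0.3s ease-out;
    }
    .drawer.open {
        /* translate() keeps the movement on the accelerated path */
        -webkit-transform: translate(0, -100px);
    }

Toggling the open class from JavaScript is then enough to get a smooth, hardware-accelerated slide.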

Of course, this ignores Windows Mobile, which has about 21 percent of the United States smartphone market with all of its individual hardware makers combined, and less than 6% worldwide (I don't know the exact statistic, but I know it is below the iPhone's roughly 6%). However, a developer has to accept that the mobile web is a worldwide phenomenon. WebKit is the most popular mobile browser engine, and developers may jump on the opportunity to provide the best experience possible to their users, just as developers focus on Internet Explorer on the desktop. WebKit's animations may become what ActiveX used to be on Internet Explorer, except that because they originated in an open source engine and degrade well in browsers that do not support them, they may become a harmless standard.

The Androids! They're compiling!

I'm here at the Google IO conference in San Francisco, and in the morning keynote we saw a real Android phone. My father thinks it is an HTC Dream. However, the coolest thing was not the way the internal compass communicates with Google Street View to provide a 360-degree view that depends on which direction the phone is oriented; the coolest thing is that it is relatively fast. This is compiled Java. Naturally, compilation also makes the footprint smaller, as all of the unnecessary stuff that makes the files human-readable has been stripped. This is going to set an example for how mobile phones use Java in the future: no more of these silly, slow interpreted things.

One more thing: the Android multiple home screens seem like a Frankensteinian monster of the iPhone home screen and KDE's Plasma widgets. It is kind of cool how the background moves half a screen when switching home screens, giving the illusion that the widgets are closer to the front. The idea of using a high-res panoramic shot for the background and sliding it left and right as the home screen switches is brilliant in itself.

25 May 2008

Web 2.1: Mash-Ups and Agglutination

The next step in the web world of user-generated content is not new ways of generating content, but new ways of consuming more content at once. Pages that pull content from different sources and display it to the user in one integrated view have become popular with Web 2.0, because a user can see related things all around the main content and see how everything connects. An example of a company that nails this is Google. Take a look at any Google Finance page: Google supplies the graph and a little of the data about the company, but the majority is taken from other places, be it another Google source, such as Google Blog Search, or a non-Google source, such as Reuters statistics. Google even makes less work for itself by pulling in data from AOL Finance and Yahoo Finance (look in the right sidebar). The same pattern shows up in other Google products. Intelligent laziness is a stellar quality in a software developer, and mash-ups enable developers and corporations to be lazy and focus on what they're good at (in Google's case, indexing stuff) instead of trying to spread a broad net over every area in which they could possibly compete.


With this new paradigm there is some risk of confusing the user; all of the other stuff on the page may distract from the main data. However, if the main focus of the page is prominent enough, this should not be an issue. Such mash-ups are made possible by open web APIs and by search engines that index everything and then serve that content back out. It is worth noting that this is the same strategy Google uses for online advertising: it displays relevant links next to the content, which may draw the user's attention. When sites use Google's APIs this way, it also gives Google a chance to serve more ads, since users are pointed at Google Blog Search and other Google services. Isn't it great that developers can be lazy?
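The plumbing behind a small mash-up really is lazy-friendly. A hedged sketch: the endpoint URL, the JSON shape, and the "related" list element are all hypothetical stand-ins:

    // Pull headlines from a third-party feed and append them to
    // an existing <ul id="related"> in the page's sidebar.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "http://example.com/api/headlines.json", true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState !== 4 || xhr.status !== 200) return;
        var items = JSON.parse(xhr.responseText); // e.g. [{"title": "..."}]
        var sidebar = document.getElementById("related");
        for (var i = 0; i < items.length; i++) {
            var li = document.createElement("li");
            li.appendChild(document.createTextNode(items[i].title));
            sidebar.appendChild(li);
        }
    };
    xhr.send(null);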

17 May 2008

Making the Jump to the Web

Many applications on the web are now considered by the general public to be "semantic" web applications. In this case, semantic means easier to use than the "old web": easier to find, collaborate on, and share documents. This can be accomplished through AJAX and the like, or through proprietary formats such as Flash and Java. Paradigms that require the user to reload the page or move to a different page to do something are not considered semantic, because they remind the user that they are in a web browser. Ideally, a semantic web page provides an interface the user is already used to, such as .Mac web mail imitating the Mail.app interface. Some web browsers provide tools for doing this that degrade gracefully; most notably, WebKit (the Safari engine) and Gecko (the Firefox/Netscape engine) provide rounded corners through the -webkit-border-radius and -moz-border-radius properties. And any browser can provide a system-themed dialog using JavaScript; for example, something along these lines:
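    // A minimal sketch; the question and greeting text are illustrative.
    var name = prompt("What is your name?");
    if (name != null) {
        alert("Hello, " + name + "!");
    }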

This input will ask a user for their name and provide a box to answer it in, and it adopts the look and feel of the system it is running on. In this way, a semantic web application can provide a method for user interaction that feels integrated with the system.


A browser that goes above and beyond in terms of integration is WebKit. Integration and looking great on one platform sometimes requires sacrifices on others; however, in most cases WebKit's integration degrades gracefully, leaving a "normal" experience in other browsers. For example, if this page is viewed in Safari 3 or another recent WebKit-based browser (like the Android browser), hovering over the /rc/etc image at the top will cause it to tilt and scale. This is done through the -webkit-transition and -webkit-transform CSS properties, and it degrades gracefully in other browsers, leaving nothing behind. See here for more information on CSS animations in WebKit.
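The rules behind that effect look roughly like this; the selector is a stand-in for the image's actual id, and the angle and scale are approximate:

    #banner-image {
        /* animate transform changes over a quarter second */
        -webkit-transition: -webkit-transform 0.25s ease-in-out;
    }
    #banner-image:hover {
        /* tilt a few degrees and grow slightly on hover */
        -webkit-transform: rotate(-5deg) scale(1.1);
    }

In browsers without -webkit-transform, the hover rule simply does nothing, and the image sits still.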


Another way to provide more seamless integration is through sliders. It just so happens that WebKit provides a gracefully degrading slider as well: -webkit-appearance: slider-horizontal (or slider-vertical). This degrades gracefully in other browsers by simply leaving a text field for users to type a value into. For example, something like this (the id and initial value are placeholders):
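    <input type="text" id="volume" value="50"
           style="-webkit-appearance: slider-horizontal;">

In WebKit this renders as a native horizontal slider; everywhere else it stays an ordinary text box containing "50".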


Of course, all of this can be applied to mobile web applications as well. Sites targeted at the iPhone and other mobiles with Gecko- or WebKit-based browsers can make free use of the appropriate properties, because they don't have to worry about other browsers (besides Windows Mobile's IE). In particular, combining a little JavaScript with the WebKit animation properties is a path to smooth transitions on the iPhone.
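As a sketch of what that looks like, assuming WebKit and with the id, class name, and distance all made up: tapping the pane toggles a class, and the declared CSS transition does all of the animating.

    <style>
        #pane {
            -webkit-transition: -webkit-transform 0.4s ease-out;
        }
        #pane.shown {
            /* slide one iPhone screen-width to the left */
            -webkit-transform: translate(-320px, 0);
        }
    </style>
    <div id="pane">Tap me</div>
    <script>
        // Flipping a class is the only JavaScript needed; the
        // transition animates the resulting style change.
        document.getElementById("pane").onclick = function () {
            this.className = this.className ? "" : "shown";
        };
    </script>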

What the "iPhone Killers" don't seem to get

So, RIM has a touchscreen iPhone killer, there is an open source iPhone killer, a Samsung iPhone killer, and a Windows Mobile iPhone killer. All of these devices have a touchscreen. However, all of these devices also have buttons.


Think of how uncomfortable it is to type on these devices' virtual keyboards, either with your fingers or with a stylus, and have to either bend your thumb downward in an awkward way or reposition the device in your hand. This is, of course, true for the iPhone as well; however, it makes more sense on the iPhone. On the iPhone, one repositions the device when returning to the home screen and switching activities, and it is more natural to switch positions when switching activities than to switch positions just to bring up a menu within the current activity.

The iPhone sidesteps this model entirely by simply not using menus. If something is not important enough for one of the buttons on the screen, it is probably too distant from the actual function of the application. Granted, some options can be grouped into modal dialogues behind buttons, but those represent distinct groups of the same task. With a traditional "menu phone", there is no distinct grouping to the menus, or even any semblance of standardization. On a touchscreen phone, muscle memory is key, and if these small menus pop up with different things in different places in different applications, muscle memory will never be built up.

For those who think that's not so bad, consider this: phones aim to get something done where a laptop or desktop computer could not be used or would be awkward. That constraint covers time as well; if a user is going to be working on something for a long period, he or she may as well use a laptop. If things are in different places, can a user really get a multitude of things done quickly, quite possibly across different applications? A user may build muscle memory for one application, but this lack of consistency loses the "learn one, learn all" principle that Apple and the iPhone really get. Cell phone manufacturers, however, chase the marketing appeal of "more features" rather than a few simple features. Features don't have to be great; they just have to be simple and consistent.