Portable Native Client - It begins

The long slow death of JavaScript as a reasonable platform for "apps" begins.

Portable Native Client: The "pinnacle" of speed, security, and portability

I've been deeply critical of HTML5 and JavaScript as a platform for general-purpose applications for several years now. HTML5 is a step in the right direction as far as layout goes, but JavaScript was never intended as a target for compilers and is intrinsically inefficient.

I always felt that the push toward JavaScript-based web applications by Google was premature in that HTML5 was incapable of efficiently running applications. Even if you had a sufficiently fast machine to get reasonable performance out of a "web" application, you would do so at the expense of battery life. Assuming identical algorithms, a web application would consume far more cycles than its native equivalent for the same unit of work. This, at a time when more and more devices are battery powered and portable. The push was folly.

Up until today, Native Client had three problems:

  1. CPU architecture specific.
  2. Lack of rich APIs / native widget / platform bindings.
  3. Chrome only, not an open standard.
The first has now been addressed; once two and three are resolved, the "web" will finally be a platform that can potentially replace modern OSs. The originally presented vision of Chrome OS was always doomed to fail: the "web" as it existed at the time was a platform on which applications could be barely anything more than form-like curiosities. Anything heavy, such as IDEs, photo editors or games, would suffer embarrassing levels of performance and/or battery life.

It is my hope that, through Native Client or similar technology, the "web" does become a cross-platform source of native applications. Such a development would be of monumental significance, as it would finally be possible to deliver on the age-old promise of Java - write once, run anywhere - but without the bloat of early Java or the current web. Curated collections of applications will always have a place, but the walled gardens will fall and applications, developed without compromise (unlike today's web "apps"), will have a potential audience of everyone.


Android 5.0

Features I'm expecting / hoping for in Android KLP (5.0?):

  • A bundle of usability tweaks.
  • OS-wide features to selectively hide soft-buttons on an app-by-app basis (possibly requested by the app, possibly overridden by the user).
  • Android desktop mode, with the ability to load apps into resizable windows or to snap apps Windows-8 style. The desktop mode would improve navigation with keyboard and mouse, automatically disable the soft keyboard and allow Android to be used with larger monitors.
  • Ability to toggle DPI on an app-by-app basis (as per Paranoid Android).
  • More social-media integration within the OS via open APIs, with Google+ baked in.
  • Better Bluetooth keyboard integration, with full keyboards supported, and the ability to optionally disable soft buttons automatically when their functions are available via the keyboard.
  • Optional split-software-keyboard, useful for larger phones.
  • Thumb mode within the smartphone UI, where the screen is scaled down to an area reachable by the thumb, aligned to the bottom right or bottom left. This is useful for commuters on a train who must operate ever-larger-screened phones with one hand whilst holding a rail with the other. Thumb mode could be activated explicitly (possibly via the power-button menu) or, if the hardware supports it, by detecting the thumb grip (phone held in the right hand if right-handed, in the left hand if left-handed). In that grip, with auto-thumb mode selected, the screen would automatically scale down to the maximum range of the thumb; if the touch screen supports hover detection, it would scale down only while the thumb is hovering over or touching the screen. That way, the phone could have a large screen yet be entirely usable with one hand, without the user ever having to toggle thumb mode manually. Sensors on the left and right sides of the phone would detect two or more fingers on one side; combined with the user specifying (once) whether they are left- or right-handed, you have a phone that uses the full screen to display information but, when the user moves their thumb over the screen, scales down so that ALL UI elements are in range. In the two-handed grip (a right-handed person holding the phone in their left hand, and vice versa), thumb mode would not activate, as the index finger of the opposite hand - which has full range of the screen - is assumed to be in use.
  • File explorer integrated into the OS, possibly a simplified version of a full desktop file manager.
  • Simplified app backup (to desktop/cloud) built into OS.
  • Add the ability for applications and/or users to run an app in battery-saver mode. For example, lots of apps clock the CPU/GPU to maximum when running their game loop, but the maximum clock might consume 50% more power than 80% of the maximum clock, and the app has no use for the extra cycles. Limiting the CPU/GPU clock speed could lead to much longer gaming times for the many apps that are not optimally tuned.
  • A Desktop-Linux Compatible Kernel so that Android devices can run Ubuntu etc.
  • The Android application stack being refactored into its own project so that it can be supported by any browser (such as Chrome, or Chrome OS). Apps that don't use phone-specific features would then be able to execute natively via the web. This would include native apps utilizing C++; as such, they would run like lightning next to JavaScript/HTML5 apps and be able to access more hardware features directly (or Google might go Native Client all the way).
  • Many more voice-search features, especially the ability to use multiple languages without having to explicitly switch language.
  • More gesture-based navigation features, possibly the addition of a conical-menu-based UI, and possibly deprecating the soft-button bar or making it optional.
  • A brand-new, innovative launcher that doesn't use icon grids.
  • Addition of bandwidth-management APIs. At the moment, Wifi is all-you-can-eat; 3G tends to consume less bandwidth where possible, but there is no option to consume less bandwidth on Wifi. Users need the option to minimise data use even when on Wifi (as lots of portable hotspots charge per MB). This is increasingly important if Android is to become a notebook OS.
  • A game-pad only navigation mode built into the OS.
  • Support for H265.
  • Portable Native Client support in the NDK, so that all apps automatically support all current CPU architectures (ARM, ARMv7a, MIPS, x86) and all future architectures at 99% of native speed. I imagine Intel is especially keen on this being implemented.


Google I/O 2010, Keynote 1 Impressions

The first Keynote of Google I/O 2010 had a few highlights:

  1. HTML5 my Lord - Kumbaya

  2. Native Client Underplayed

  3. The announcement of the Chrome Web Store

  4. Adobe focusing on HTML5 tooling

  5. The Open Sourcing and permissive use licensing of the VP8 codec

  6. The partnership of VMWare/SpringSource and Google to allow developers to create portable Cloud Apps

HTML5 my Lord - Kumbaya

Rightly so, Google is heavily pushing its weight behind HTML5 as a way to expose greater client-side functionality to web-developers whilst still remaining in a secure environment.

Development team after development team was rolled out onto the stage to wax lyrical about how great HTML5 is and how it allows them to create richer web experiences.

Very true: the demos all highlighted functionality that was not possible in HTML4. Yet something was off. HTML5 allows the server to send the client JavaScript code to run locally, and also allows the code and resources to be cached locally so that an application can be run offline (the server can create a client-side database cache). All this is great and a definite step forward.

BUT, HTML5 still sits on top of the DOM and JavaScript, and that makes it slow. I assume that the PC used in the demos was capable, and even so, you would have to go back 10 years to re-create in a desktop application the sort of sluggishness demonstrated in these HTML5 applications.

For the majority of apps, it simply will not make a difference. If you are dealing with text and forms with the occasional diagram on your page, HTML5 will solve all your needs, as-is.

But, I think that it is a fundamental mistake to think that web-apps can compete with desktop apps with the current iteration of HTML5.

Desktop apps need real bindings to the environment on which they sit. They require access to the USB ports, the microphone, the webcam, the DisplayPort. Desktop apps need to run at native speeds, not through a layer of JavaScript that must be constantly re-evaluated at runtime for optimisation because the language was designed for maximum flexibility, not speed.

Desktop apps need to be able to control screen refresh and to synchronize on vertical refresh. They need to be able to run full-screen if required. They need to be power-efficient and to perform their tasks in the smallest amount of time.

Some HTML5 apps will not be subject to performance considerations nor the considerations of file-system or hardware access. For these apps, HTML5 represents a huge leap forward, but if the intent of Google is to move all apps onto the cloud as a combination of server and client-side scripting, then HTML5 needs to be updated to allow for native-execution-speed and better bindings to the host device capabilities.

Native Client Underplayed

As if to acknowledge that JavaScript will not meet the performance expectations of web-users accustomed to desktop applications that run at native speed, the keynote mentioned in passing the 'Native Client' project but downplayed it at this time as it undermined the main theme of HTML5 being the Alpha and the Omega.

The Native Client project allows web authors (via a browser plugin) to write code in C++ that runs natively within web applications. The source code is compiled and made safe via a validation pass (making sure that the code cannot overwrite, or jump to, code outside its designated memory area). The code is then executed by the browser in a sandboxed environment. The code is said to run at 97% of native speed (likely due to the limited set of addressing modes accessible within the sandbox).
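The confinement idea described above can be sketched numerically. The following is a conceptual illustration only, not the real Native Client toolchain: the names, region base and sizes are invented, but the arithmetic mirrors the published software-fault-isolation technique of masking every computed jump target onto an aligned instruction bundle inside the module's own code region.

```python
# Conceptual sketch of sandboxed control flow (NOT real NaCl code).
# All constants and names here are illustrative assumptions.

BUNDLE_SIZE = 32          # instructions are grouped into aligned bundles
REGION_BASE = 0x10000     # hypothetical start of the sandboxed code region
REGION_SIZE = 0x10000     # hypothetical region size (a power of two)

def mask_jump_target(addr: int) -> int:
    """Clamp an arbitrary computed address into an aligned bundle
    within the sandbox region, as the inserted masking code would."""
    addr &= REGION_SIZE - 1        # cannot escape the region
    addr &= ~(BUNDLE_SIZE - 1)     # can only land on a bundle boundary
    return REGION_BASE + addr

# A hostile computed target is forced back inside the validated region:
evil = 0xDEADBEEF
safe = mask_jump_target(evil)
assert REGION_BASE <= safe < REGION_BASE + REGION_SIZE
assert safe % BUNDLE_SIZE == 0
```

Because every indirect jump is preceded by this masking, the validator only has to check straight-line code within each bundle, which is why the runtime cost is so small.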

Whilst this is currently not part of the HTML5 specification, I hope that it is added sometime in the next year as native code execution is the missing link in HTML5. Of course, native code execution means nothing unless it is portable, so I would not support this development unless there were a virtual machine option to ensure portability whilst allowing for an order of magnitude greater performance than JavaScript. I understand that Google is investigating using the LLVM for this purpose.

The announcement of the Chrome Web Store

The Chrome Web Store was rolled out and several apps were highlighted. This was the biggest mis-step in the presentation, for none of the apps seemed to equal a desktop app. The best web 'app' demoed was 'Plants versus Zombies', which made me sit up and take notice for a few seconds. It seemed to be running at a good speed and to have working sound (something I have not seen done with any degree of success in HTML5 so far). Then, it was revealed that this web 'app' was actually running in Flash. So much for the theme of the keynote (HTML5 is great). We all know that Flash is supposed to be evil and bad, but it still provides functionality that HTML5 can only dream of, for now.

Someone came out and showed an offline demo of a web-based Photoshop-a-like application. The performance looked fine for the simple use-cases he detailed but I couldn't help but wonder how many users would pay $5 for essentially a web-page experience.

Adobe focusing on HTML5 tooling

Adobe's CTO, Kevin Lynch, came on stage to 'focus on HTML5'. He showed off some really nice tools for developing HTML5 web pages, with tooling based around the new CSS elements. It worked nicely, but it was a simple use-case. It's obvious that Flash remains a necessary evil until HTML5 pulls its act together, and it's nice to see Adobe transitioning to the open standard whilst still supporting the closed one.

The Open Sourcing and permissive use licensing of the VP8 codec

As predicted for some time, Google took this opportunity to announce the open-sourcing of the VP8 codec. This is a very efficient codec in the same league as H264 without the patent issues (maybe).

Firefox has long been in a predicament with regard to how to implement the open HTML5 standard, which allows websites to embed proprietary-codec-encoded videos within <video /> tags. The browser must be responsible for rendering the video, but a truly open browser does not and cannot ever afford a license for a proprietary codec. Firefox publicly flailed and supported Ogg Theora as the best open-source alternative, although it was obvious that H264 was a much better implementation. Google itself had a conundrum: how could it move its videos to HTML5 when different browsers support different codecs? Would it have to re-encode all the YouTube videos for every combination of browser-supported codec?

It looks like, in the end, there will be two codecs for YouTube: H264 and VP8. H264 has to stick around because current-gen mobile devices have hardware acceleration for H264, whereas VP8 has to be decoded in software (for now), leading to potentially sluggish performance and/or poor battery life. This should change in the next few years as hardware support for VP8 is added to most SoC solutions.

Personally speaking, I hope my Atom 330 ION based net-top has enough CPU/GPU power to software decode VP8 720P streams (1080P is probably out of the question).

It also remains to be seen how the MPEG LA group responds to Google attacking their cash-cow. It's obvious that there are enough devices out in the wild and enough support for H264 (Blu-ray, mobile devices, etc.) that H264's continued survival is ensured. They may respond with litigation, or by opening their own codec, or perhaps just mine the existing licensees whilst developing a follow-up 3D-optimised codec. Who knows?

The partnership of VMWare/SpringSource and Google to allow developers to create portable Cloud Apps

This part was interesting to me. Google App Engine is cool, GWT is cool, but building on Google App Engine always felt like vendor lock-in to me. Glad to see that there will be competition in the cloud for hosting. Google could have tried to tie in app developers to using their hosting, but it seems that an App developed using App Engine will be deployable to a number of different targets.

The Roo demo was nice, but I'd seen it before on YouTube, and I really hate remembering command-line commands, so I was a little disappointed that the presentation never detailed the GUI tools.

I like the idea of GWT 2.1 supporting multiple profiles and it being able to detect devices and adjust the look and feel accordingly. I'll be using 2.1 myself in the near future I'm sure.


Android takes the lead

The smartphone OS war continues to rage, with Android making serious inroads into Apple's share of the pie. According to statistics supplied by AdMob, in December 2009 iPhone OS enjoyed an 11% lead over Android OS. Just three months later, Android enjoys a 7% lead over iPhone OS.

These statistics only factor in web traffic to mobile ads served by the AdMob network, which is currently the largest provider of mobile adverts and monetization. They are unlikely to represent relative sales figures, as Android phone users may spend less time in apps than iPhone users, but they do demonstrate the trend toward Android and away from iPhone OS.

We can expect the current trend to continue as the HTC Desire and HTC Legend hit the market soon.

iPhone demand may be temporarily dampened by the expected release of the new iPhone model, likely to be announced in June, as customers wait to see what the new phone offers or hold out for a price drop on the current models. This yearly release cycle is sub-optimal: smartphone customers in the market for a new phone often desire the latest and greatest, and there is a shiny new phone every month for the tech-lovers, each shinier and fresher than the Apple phone from six months ago.

Android OS looks set to gain from Apple's lethargy by the continued monthly onslaught of new Android handsets with new form factors.

Unless Apple chooses to license iPhone OS to other handset manufacturers or simply announce quarterly iPhone hardware releases with new form-factors, there seems to be little that Apple can do to halt its diminishing market share.

That said, the market itself is growing and, given the current deltas and the upcoming Windows Mobile 7 release, iPhone OS is likely to settle at around 10-15% of the market. This is approximately the same position that Macs hold in the PC market. Android devices benefit from scaling up and down to cover all segments of the market (value to premium customers), whilst Apple is only targeting premium customers at the moment.

The report also details that both iPhone OS and Android have yet to seriously penetrate developing nations, where Nokia and Symbian are overwhelmingly the most popular choices. Developing nations do not usually offer subsidized phones, so handset cost is the key factor in the purchasing decision. Additionally, the network infrastructure has not been developed to handle the data requirements of modern smartphone OSs, meaning that smartphone use in these countries is usually to access non-connected apps on the move, as well as 2G SMSs and voice calls. When at home or at a wireless access point, apps can be downloaded and the web can be browsed.

Given this usage model, Android looks to be in a favorable position to conquer these markets as value models arrive with 1st-gen hardware around the $70-100 level in about 18 months' time. Given another 18 months, a $50 handset is perfectly feasible without subsidy.


Digital download markets (PC/Mac/Linux)


Over the past few years, there has been a steady rise of digital sales of applications and games at the expense of retail stores. Steam (the PC game download service) and iTunes (the digital music provider) both launched in 2003 and since that time there has been an explosion in both the number of stores and number of media items purchased on these stores.

I should state up front that although I am in favour of digital distribution under the right conditions, currently none of the PC (or console) digital download services operate under conditions that favour the consumer.

For the remainder of the article, I will use the word media. By this I mean specifically audio (such as a music track), video (such as a TV episode or movie at a given quality) or a game (a collection of files that constitute a game on a given platform). I will also concentrate on the non-console market for now, but may write a separate article on the state of the console markets.

The Trust We Give

There are three areas of trust that are implicit when buying media from an online digital download market:

1 - Trust that the provider can provide a good functional service as promised (can provide the goods on demand).

2 - Trust that we now actually own the media (trust that the provider understands that it holds your media in proxy and that you are the true owner of the media).

3 - Trust (or faith) that the provider can continue to provide the service forever, until your death (stay in business).

If you buy a piece of content from a digital market, by all rights, you should own a physical copy. A physical copy is the ultimate proof of ownership. You have it in your hand, and beyond a short physical defect guarantee period, your relationship with the retailer ends. A digital purchase is like buying a book then immediately loaning out the book to the shop you bought it from. You already paid for the book, the book is yours, but it is a conceptual ownership when you do not have the item at hand and in fact, never actually see the book. Trust is everything.

Publishers automatically get money from a market when you buy; at the point of purchase, their transaction is complete. The transaction with the consumer, however, is ongoing. We need assurances that our content will not be downgraded and that, if the company disappears, we will still have access to the goods we purchased. Currently no service offers any such assurances (as they cannot).

Too big to fail

Realistically, it *probably* won't happen with the bigger players such as Steam and iTunes, as the market seems to be behind them right now, but we can never completely know that. They are too big to fail, right?

As the number of purchases you make on a service increases, so does your risk. Buy 5 games a year at $60 for 10 years and that's $3,000 of investment in one service. Admittedly, the headline price of games quickly erodes, and the cost of replacement may be 10 times less than your original purchase price, but the price of other media such as music and video can sometimes hold its full value, especially if you are the type of consumer that waits for deals.

If someone broke into your home and took $3,000, whilst simultaneously taking $3,000 from millions of other consumers, that would be the crime of the century; yet we are creating a situation where exactly this could happen.

The fear of buying from a smaller marketplace disproportionately hurts smaller marketplaces that may actually have better deals or services. If I could buy a game from Steam for $12 versus Direct2Drive for $10, I would probably go for Steam, as it seems larger and more stable. This is not good consumer behaviour; both items would be identical. I just do not trust the second market, and with no consumer protections, free-market consumers end up rewarding the largest market, not the most competitive.

Direct2Drive has some good deals now, but are we guaranteed they will be there in the future? Ubisoft, EA and dozens of other companies offer digital downloads (often laced with DRM that assumes the existence of their servers, which in turn depend on the existence of their company). If these companies cease to exist, then the content disappears. PC owners therefore flock to Steam. In doing so, all the eggs go into one basket and Steam becomes too big to fail.

Another area of worry in these digital markets is the complete lack of cross-compatibility between the markets themselves. The markets are not simply a means of distribution: they tie installation and synchronization functions into their own clients.

That is, if I want a game from Steam, I need Steam installed. If I want music or video from Apple, I need iTunes installed. I can't compare oranges with oranges across these markets, as there is no common interface to access all the markets concurrently, allowing small monopolies to emerge around the 'safe' players. What is needed is an open-market standard and open clients that can implement the standard across platforms. Of course, I don't mean to say that I would expect to be able to play an iPhone game or Android game on my PC, but I should be able to have a client on my PC where I can browse all content across all platforms and even make a purchase without resorting to proprietary clients.

Digital Media Rights Repository

When you buy a piece of digital media from a store, you are buying a license to use that media either on an indefinite basis or as a rental for a fixed amount of time. If you bought an indefinite license then that license should last until death of the licensee. Copyright remains with the copyright holder but the licensee should have rights.

This situation cannot continue indefinitely; something's gotta give, and legislation often starts with a headline failure.

So, how can we ensure that the life of your downloaded media extends beyond the life of the provider you bought it from?

Well, each piece of content needs a unique content ID, each user needs a unique user ID, and each license type needs a unique license ID; the licenses that users hold to content should be recorded in a central digital copyright repository. Each digital distribution service registers purchases with the central repository at the point of sale and pays a nominal 'insurance' fee per purchase (1 or 2 cents, aggregated and paid weekly).

From the moment that you buy the content from your provider, it is insured against provider failure. That is, if Steam goes bankrupt, you can file a claim, and activate another provider for the same content free-of-charge. The replacement-market does not need to pay the publisher for the new user that is downloading their already purchased media and the administrative costs are covered by the nominal per-purchase fee so no-one is out of pocket and customers are free to seek the best deals with protection.
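The data model behind this proposal is small enough to sketch. The following is a hypothetical illustration of the scheme as described above: every name here (`Repository`, `register_purchase`, `claim`, the fee amount) is invented for the example, since no such central repository exists.

```python
# Hypothetical sketch of the proposed rights repository. Every market
# registers each sale (user, content, licence type) centrally and pays
# a tiny per-purchase insurance fee. All names are illustrative.

from dataclasses import dataclass, field

FEE_PER_PURCHASE = 0.02  # the nominal 1-2 cent insurance fee


@dataclass
class Repository:
    licences: set = field(default_factory=set)   # (user_id, content_id, licence_id)
    insurance_pool: float = 0.0

    def register_purchase(self, market: str, user_id: str,
                          content_id: str, licence_id: str) -> None:
        """Called by the selling market at the point of sale."""
        self.licences.add((user_id, content_id, licence_id))
        self.insurance_pool += FEE_PER_PURCHASE

    def claim(self, user_id: str, content_id: str) -> bool:
        """If the original market fails, the entitlement survives here
        and can be re-activated with a replacement provider."""
        return any(u == user_id and c == content_id
                   for (u, c, _) in self.licences)


repo = Repository()
repo.register_purchase("steam", "user-42", "game-1234", "indefinite")
assert repo.claim("user-42", "game-1234")      # survives provider failure
assert not repo.claim("user-42", "game-9999")  # never purchased
```

The key design point is that the repository stores only entitlements, not the media itself, which is why a one-or-two-cent fee could plausibly cover its running costs.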


Android versus iPhone, Diversity versus Homogeneity

Just over two years ago in this blog, I laid down the reasons why I felt that Android OS would be a success. I attributed that success to Android being hardware agnostic and having a clean set of application APIs. I further stated that it was imperative that Apple allow applications on their iPhone OS before it was too late.

Well, as we now know, Apple announced their App-Store in March 2008 (4 months after the Android OS announcement and my original article). In a little over a year and a half, over 100,000 applications have been approved and listed within the iPhone App-Store.

On October 22nd 2008, the first phone loaded with the Android OS, the HTC Dream, was released, sporting a hardware keyboard and version 1.0 of Android. In the following year new phones were announced, slowly at first: the HTC Magic, then the HTC Hero, Samsung Spica, and Motorola Cliq. More recently, the second-generation phones have either appeared or been announced (Dell Mini 3i, Motorola Droid, Sony Ericsson Xperia X10, Samsung Behold II, and more).

The Android flood has started, and with the deluge of consumer choice, Android OS is finally gaining significant market share in the Smartphone market whilst Apple continues to eat into the market share of the other major players. RIM, Nokia, Palm and Microsoft look set to fight for their very survival.

The remainder of this article is a critique of the features and strategy of iPhone OS versus Android OS in the battle for market share in the Smart phone market.

iPhone best bits

  • First to market with ergonomic and user-friendly multi-touch media player and smart phone combination.

  • The speed of the iPhone is excellent.

  • Excellent on-board browser (no-frills but high quality).

  • Easy to navigate between applications and an easy learning curve.

  • Able to make iTunes purchases directly from the device. Makes it easy to find and download new content.

  • Maps/GPS implementation is excellent and you will never get lost again.

  • Excellent 3D and video acceleration hardware inside the device.

  • Excellent choice of over 100,000 (and rising) approved applications in the App Store. Easy to browse and install.

  • One major and one minor OS upgrade per year (usually). New features added for free.

iPhone's shame

  • The physical design of the iPhone between owners is exactly the same. The iPhone (aside from a choice of 2 colours) is completely homogenized and as an owner, there is almost a sense of shame in producing an iPhone out of your pocket when everyone else around you has the very same phone in their pocket. Even if you love what it does, pulling it out of your pocket makes you feel like a sheep and rightly so. More hardware variety is required.

  • No background apps means that only one application can run at a time. Many useful applications are not possible due to this restriction (such as incoming VoIP phone calls, bluetooth location aware services, etc.)

  • Lack of customisation/advanced options in the iPhone OS... The OS treats you as if you were an idiot, which is a great default, as most users are not power users and giving users access to advanced features usually results in more support calls. BUT, there are millions of advanced users who need access to advanced settings. These users wish to be able to set the mean time between snooze alarms, to attach multiple images to emails, to have a unified inbox, and to specify a six-digit PIN code instead of the weak 4-digit one. The browser lacks many advanced options, such as selective cache flushing.

  • Restrictive App-Store terms and conditions mean that it's not possible for me to run emulators, Flash, Java or alternative browsers, or to run Voice-over-IP software over the 3G network. Apple lets you know at all times that you do not own your own device and do not have the right to decide what is allowed on it. Using the exceptional hardware to its full potential is purposefully crippled by the OS, as mandated by boardroom decisions at Apple HQ. You cannot transfer application purchases between regions, and some applications are locked out of certain regions. Very restrictive and completely unfair.

  • Battery life is still a problem. If using the iPhone for gaming, don't expect to have any battery left after 80 minutes. Never game for longer than 30 minutes a day if you want your phone to actually be usable as a phone. The more I think about it, I love the iPhone's online distribution system for gaming, but I would prefer to have an iPod Touch for gaming, as there is no risk of running your phone's batteries down.

  • No hardware back button means that when an app links to a webpage, your state in the referring app is lost: you have to click the menu button, find the originating app, and find the item that linked to the webpage in order to continue. It's terribly unergonomic.

  • iTunes integration is still tied to the iPhone hardware. Makes it problematic to move to a different phone type without losing content or without having very good IT skills.

Android's Best Bits

  • A large choice of handsets that is set to explode in the coming year. Cheaper handsets with slow CPUs, luxury handsets with fast CPUs, a variety of styles, colours, keyboard, no keyboard, capacitive touch screens, resistive touch screens, low res screens, high-res screens.

  • Customisable look and feel means that each manufacturer can have their own unique feel whilst building on a secure infrastructure that has access to thousands of applications as standard.

  • Excellent set of standard OS features including turn by turn GPS navigation for free (Android 1.6+).

  • Excellent integrated browser (HTML5 compliant). Customisable, and alternative browsers can be installed by the user if they desire (such as Firefox).

  • Exchange integration, multiple email accounts and email account unification.

  • All types of applications are supported including virtual machines and services that overlap the core OS services (user has a choice of what he/she wants to use).

  • Android Market place has thousands of free and paid-for applications.

  • Support for background applications.

  • Widgets supported - customise the look of your phone and provide convenient shortcuts to system functions in a way that suits your usage pattern.

  • Android OS upgrades provided on a regular basis.

Android's shame

  • Version 1.0 of Android did not support on-screen keyboards so we were limited to the HTC Dream only for the first 6 months of Android's life. It wasn't until recently that an iPhone contender appeared from a hardware point of view (the Motorola Droid) and most existing handsets fit into the 1st generation camp right now.

  • The iPhone OS is still conceptually easier to use and easier to understand (at the expense of advanced features). Android still has some ergonomic hiccups to address such as the clunky pull-up menus and lack of pinch zoom on many handsets.

  • Android does not currently attract game development due to unoptimised libraries and a diverse set of hardware that makes it difficult for developers to target and test on all platforms.

  • The application store is currently less profitable for developers, due to the iPhone's much larger install base and possibly for demographic reasons.

  • The first Android devices suffered from slow and clunky operation. This was a combination of the early versions of Android and anaemic CPUs (528MHz ARM11s) that simply could not compete with the fast and responsive iPhone.

  • Some early devices remain 'clunky'. Some of this may have to do with the unoptimised Dalvik virtual machine.

  • The Dalvik virtual machine has been and continues to be a huge disappointment in terms of performance, but it does provide an excellent sandbox that is required for applications to leverage the Android platform. I would be extremely surprised if there were not some major optimisations in store (such as JIT) for Dalvik in the next year, but this is a key area for improvement in the Android infrastructure.

Looking ahead

Apple is far in the lead right now, but it has a lot to consider if it wishes to maintain that lead.

By this time next year, there will likely be scores of 2nd and 3rd generation Android phones available, all with usable, ergonomic user interfaces and all able to leverage a huge library of custom applications. These phones will come in a huge range of form factors and specifications: some with keyboards, some without, some hip, some corporate, some suited to games, some optimised for battery life. Android OS is designed to fit on a variety of hardware and therefore has the ability to fill every niche. Applications will be portable across devices, but each device will be personalised and unique. The Android OS will continue to accrue new features and new optimisations, and with Google as its sponsor, its sales momentum looks set to accelerate.

The iPhone will continue to be a dominant force in the next year, mainly down to its superior hardware and excellent user experience (at some expense of flexibility). But the hardware advantage is fast disappearing. The Motorola Droid goes toe-to-toe with the 3GS in terms of performance, and there are phones based on the Snapdragon chipset that should outperform the 3GS significantly.

But performance is not the battleground. The battleground is choice and features.

Some potential iPhone customers have already been put off by the lack of physical keyboard on the iPhone and have either chosen a different phone or are waiting until a phone appears that matches their aspirations.

So, what will Apple do now? Will their next phone have a physical keyboard or not? Will it have curved edges or well-defined lines? Will it come in white, black, red or blue? Will it have a WVGA screen (as with the Droid) or an HVGA screen (as with the current models)? Extrapolating just these few options leads to problems.

How will Apple fill the niches that the market demands? Will they produce a series of iPhones (keyboard, non-keyboard, basic, hi-res, gaming edition, etc.)?

The only possible defence that Apple has from the coming Android onslaught is licensing the iPhone OS and using their current momentum to beat Android OS at its own game.

I predict that Apple will either announce multiple new iPhone models this year (beyond the simple capacity differences we have seen in the previous three years) OR announce the licensing of the iPhone OS to third parties (while continuing its own hardware development). The former would help Apple in the short term, but the latter is the only chance for Apple to maintain its momentum in the long term.

Watching and discussing the mobile OS market is a most enjoyable hobby and I look forward to watching the story unfold over the coming years.


Chrome OS first impressions

So, Chrome OS has finally been revealed, at least in concept.

What we know

* Fast boot-up times (7 seconds at this point).
* Only a sub-set of hardware will be supported and it requires a solid-state drive.
* Trusted computing model. The user has no permission to change anything, and all system modules are signed and verified, meaning, in theory, no viruses and no need for virus checkers.
* The operating system is the browser. You boot into a modified version of the Chrome browser. No other native applications are supported.
* Want an application? Make it a web app. This platform is strongly pushing HTML5.
* All data is stored 'on the cloud'. That means that if your life belongs inside Chrome OS, it doesn't matter which machine you use as all your files are on the Internet. Your computer is ... insert pause ... 'Stateless'.
* USB thumb drives are supported but hard-drives are not.

What we don't know

* Will all your earthly data be protected by a single cloud password (assuming you have a preference for the Google services)? Will Google choose to beef up their security before launching this? I do hope so.
* How will this device function with media stored on an external hard drive or thumb drive? The host hardware will easily be able to deal with all SD/HD content by the end of 2010, so how will media playback work? All in-browser?
* Which hardware will run the OS and what price point will it weigh in at?

What I like

* Trusted computing is great. A platform where viruses and virus checkers are a thing of the past. I think this is the most appealing part of the entire platform.
* Fast Booting is fantastic.
* HTML5 applications are great. But that is not unique to Chrome OS.
* No local storage (other than boot flash) means that netbooks based on the Chrome OS specs will be another $50 cheaper. Cheaper is good.
* I like the idea of a dumbed down computer. More options invariably lead to more problems for an average user. So take away the big red buttons that will invariably lead to confusion or failure. Nice work.

What I don't like - Data Security

Google wants all data in the cloud. They want to sell you the benefits of implicit and automatic data backup, redundancy and convenience in exchange for your trust.

Actually, I personally do trust Google, more than most companies. I trust Google to keep my data safe, and I trust them to run their intelligent algorithms at a high level, acting as the clearing house between customer desires and the vendors able to match those desires. I trust in this because they know that their business is dead once users start viewing them as evil, so they must make their best efforts to never be evil. I have no problem with that, although I accept there are plenty of good arguments on the other side - but that is for another time.

My problem is that we are not ready for users to store all of their worldly information in the cloud until data in the cloud is safe from social engineering attacks. User names and passwords, no matter how strong the passwords, can be extracted from users at double-digit percentage rates. [1] [2]

I would personally love to store more information in the cloud if I trusted it. But I don't, and I don't trust my password. I don't like the fact that I need to embed my password in my phone in order to be able to check mail from my phone. My whole life is in my email account and the gatekeeper to this smoking gun is a single sequence of characters. No.

The web needs to fix the scourge of passwords via the option of using something akin to SecurID for users who request it. I would pay to use such a device with my Google services.

What I don't like - Lack of Applications

An operating system is a broker between software and the hardware. As hardware becomes more powerful and adds new features, the operating system must expose that hardware to the applications that sit on the platform so that those features can be utilized.

CPUs are hugely powerful compared to 10 years ago, even in the netbook segment of the market. They can outperform every smartphone available and even the weakest has hardware acceleration of 2d and 3d graphics not to mention a variety of network features that were unthinkable 10 years ago.

The browser is simply too thick a layer to act as liaison between web applications and the underlying hardware. Many features of PC hardware are just not exposed at all via HTML5. The V8 JavaScript engine is a stellar piece of work: it translates an inherently slow language (JavaScript) into a relatively fast runtime by dynamically generating machine code from the original script. I understand why Google has been pushing so hard to improve JavaScript performance in the browser. BUT, the JavaScript still runs in a hugely crippled sandbox - not a problem if the browser is not the OS, but a huge problem when the browser becomes your whole world.

You still need approximately 10x more horsepower to run a piece of complex code in a browser as opposed to natively. Dynamically compiled JavaScript cannot hope to compete in performance terms with native machine code or compiled C code.

An application such as Skype requires access to audio-in, audio-out and Bluetooth headsets, and requires its own protocols for sending and receiving voice data packets that are optimised for its purpose (VoIP applications need fine-grained control over network packets and need priority over other network requests on the same computer). Chrome OS requires that Skype be a web application, just for the subset of computers that will run Chrome OS. The Skype VoIP algorithm, when translated into JavaScript, is likely to run several times slower, and the browser would not allow control of the audio-in, audio-out and Bluetooth controllers, or fine-grained network connectivity, from within the JavaScript sandbox of an HTML page.

We all love browsers, but browsers are not the only applications that are required to run on a PC. Regular users require applications such as Skype, iTunes, video editors, photo editors, etc. These could be signed and run outside of the browser, but as presented, everything must be within the browser. This is a major flaw in the OS, and although I can live without almost all of these on my netbook (as opposed to my desktop), I cannot live without my VoIP clients (which are perfect for netbooks, being already 'stateless'). So, Skype had better get re-engineering, and if this is the way the web is heading, then we had better get a better virtual machine for browsers.

What I don't like - Too many tabs

Now that the browser is king, the application taskbar has now been replaced by the browser tab-list. At this point it seems a little cluttered and unergonomic. I wonder if there will be options for cleaning up the UI.

What next?

Google needs to work on allowing some 'native' applications on the platform. HTML5 is great, but it was not meant as a sandbox for the complete replacement of desktop applications. It is far too crippled for that - rightly so in its own context, wrongly so in the context of an OS.

Without at least VoIP applications I will not be using Chrome OS for my secondary machine, and without an answer to weak password security exploits I will not be tempted to put my entire set of personal documents on the Internet.

So, those are my first impressions. I'll revisit this in a year.

Be seeing you.


Online Data Backup, the long wait for GDrive

Back in 2003, I thought I had a bright idea.

I had been taking ever-increasing numbers of digital photos without hard copies. I had also been storing more and more documents on my computer, and I started to worry about 'the fire' scenario. Even if I backed up regularly, all of my most valuable digital documents and memories could disappear in an instant.

I also hated the process of performing a manual backup. It was tedious and time-consuming, and at the time external 2.5 inch hard drives were too expensive, as were flash drives, so backups were performed to CDs or DVDs. I was paranoid about the quality of the media, so I would often double up the backup, and then when new files came in I had to remember the differences between the copies.

Which led me to my idea: transparently mirror one or more of my folders to a server on the Internet. All files would be streamed using my unused upload bandwidth and encrypted for transport. The server would instantly distribute the data to multiple physical locations, so the data would be safe even in a simultaneous fire or disaster at n-1 of the locations. The software would be a simple install, invisible in normal use but transparent about what it was sending and receiving and which files were currently pending distribution.
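The change-detection half of the idea can be sketched with content hashes: record a digest of each file as it stood after the last successful sync, and re-upload only files whose digest has changed. This is a minimal illustration, not code from any real product; the map of recorded digests and the method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class SyncSketch {
    // Hex-encoded SHA-256 digest of a file's contents.
    static String sha256(byte[] content) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest(content)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // A file is pending distribution if its digest differs from the
    // digest recorded after the last successful sync (or is brand new).
    static boolean needsUpload(Map<String, String> lastSynced,
                               String path, byte[] content) throws Exception {
        return !sha256(content).equals(lastSynced.get(path));
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> lastSynced = new HashMap<>();
        byte[] photo = "holiday-photo-bytes".getBytes(StandardCharsets.UTF_8);

        // New file: never synced, so it needs uploading.
        System.out.println(needsUpload(lastSynced, "photos/img1.jpg", photo));

        // After recording the digest, the unchanged file is skipped.
        lastSynced.put("photos/img1.jpg", sha256(photo));
        System.out.println(needsUpload(lastSynced, "photos/img1.jpg", photo));
    }
}
```

The appeal of the digest approach is that the client never needs to remember timestamps or diff files by hand - exactly the tedium the manual CD backups suffered from.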

The business model would be to charge a subscription fee for a yearly backup based on the amount of data stored.

The theory was that as more and more people started carrying digital cameras and digital video cameras, the amount of data to be backed up would increase exponentially. My pitch was "Digital Insurance". I was then, as I am now, sure that eventually the majority of Internet users would have one form of digital insurance or another. Not to have it would be akin to not having house insurance. To risk all of your irreplaceable memories to save a nominal yearly cost, with zero complexity, is simply not rational once adoption hits the tipping point.

There was one problem: I didn't own a huge server farm, and I couldn't find a way to make the idea work without a huge investment of capital, which frankly I had no experience in raising. The client and server software were relatively trivial to build; it was the infrastructure that was the sticking point. Eventually I gave up the idea, as I felt that one day someone would implement the exact same service for me (albeit without me becoming a massive multi-billionaire by virtue of having had the idea).

At the time of giving up, I strongly felt that the best fit for this kind of service would be Google due to their infrastructure. Years passed and a few services have appeared offering online backup in a way that is very similar to my original idea. Carbonite appeared and there are others, but Google remains quiet. Carbonite is a strange service. It only backs up data on internal drives and will not backup from external USB drives. I store a lot of data on external USB drives so it immediately excludes me as a customer.

I heard rumours about something called GDrive a few years back and I was sure Google was going to announce their version of an online backup service. But no news. Nothing.

What brought this all back to me was today's announcement that 20GB of storage on the Google service is now just $5 a year and 80GB is just $20 a year.

Google Storage Upgrade

Unfortunately, this is just a file container for pictures and emails, not a synchronized network/local drive.

The cost is finally in the ballpark of what an online backup system should be charging. If the infrastructure is finally available to store data in these volumes, then GDrive cannot be far away. I suspect the software will treat the online copy of photos and documents as the master, and will allow users to stream files directly from the network drive from any computer, with any computer being able to add files to the drive (given the right login credentials).

The service must negate the possibility of user-error when asking users to store all of their personal data in the cloud.

It is vitally/critically/of paramount importance that customers not forget their password or lose their USB thumb-key authentication, as any password-replacement policy implies that the photos are decryptable from the server side, which should not be tolerated.

Key management is the hardest part of the entire service and it will be interesting to find out how Google intends to deal with this.

If the data can be decrypted on the server side without your key phrase/password/thumb-stick/whatever, then all your private documents and photos are readable by Google and possibly by others online. Only if your documents and photos are entirely scrambled on the server side will users adopt the service. Security and key management are paramount.

Possibly, a copy of the key or password should be entrusted to a third party. If a thumb-key is required, then a backup of the key in a separate physical location is also required; otherwise there is still fire risk to the data, because if the key is lost and Google does not have a copy, you lose your physical copy and your online copy (as it is scrambled). This is a real sticking point in my mind, and possibly one of the reasons this service is taking so long. Username and password authentication is too weak for this service, but this service is unique in that all the data on the server MUST be scrambled, meaning that if a user forgets their password or security information, the user potentially loses everything - which would hit the news and would hit profits.
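A minimal sketch of what client-side key derivation could look like, using the JDK's standard PBKDF2 support. The passphrases, salt and iteration count below are illustrative assumptions, not anything a real service uses; the point is only that the key is a pure function of passphrase and salt, so a server holding just salt and ciphertext cannot reconstruct it - and neither can a user who forgets the passphrase.

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyDerivationSketch {
    // Derive a 256-bit encryption key from the user's passphrase and a
    // per-user salt. The derivation happens only on the client; the
    // server never sees the passphrase or the resulting key.
    static byte[] deriveKey(char[] passphrase, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(passphrase, salt, 100_000, 256);
        SecretKeyFactory factory =
                SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        return factory.generateSecret(spec).getEncoded();
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = "per-user-random-salt".getBytes(StandardCharsets.UTF_8);
        byte[] key1 = deriveKey("correct horse battery".toCharArray(), salt);
        byte[] key2 = deriveKey("correct horse battery".toCharArray(), salt);
        byte[] key3 = deriveKey("wrong passphrase".toCharArray(), salt);

        // Same passphrase and salt always yield the same key...
        System.out.println(Arrays.equals(key1, key2));
        // ...while a different passphrase yields a completely different key,
        // which is exactly why a forgotten passphrase means the data is gone.
        System.out.println(Arrays.equals(key1, key3));
    }
}
```

This is also why the "entrust a copy to a third party" idea matters: the deterministic derivation is what makes the data unreadable to Google, but it leaves no server-side recovery path at all.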

To try to round off this ramble: online backup has been a long time coming, but the technology is ready now. The pieces are in place, and I look forward to being able to forget about backup and to access my data from anywhere in a secure way.

Could GDrive finally be arriving soon? $20 a year for a backup of all my precious memories? Yes please.


HTML5 Canvas / GWT woes

I've been working on a pet project recently where I wish to control an area of the browser screen via DOM manipulation entirely on a pixel by pixel basis (and to capture all keyboard and mouse events on the area).

I investigated and found that the canvas tag seemed to match the requirements. Using this HTML5 tag, it is possible to allocate an area of the screen for dynamic updates via JavaScript.

Description of the CANVAS tag
Canvas emulation for IE6/7/8 (non-GWT)

As I'm using GWT for my client-side scripting, I dug out the GWT Canvas library. The initial trials were good, but I soon found that there were no text rendering functions exposed via the library.

It appears I had been experimenting on the cutting edge of the HTML5 specification. The text functions had recently been added to the spec and only recently added to the Firefox 3.5 beta.

Internet Explorer does not support the canvas tag at all, but the GWT Canvas library emulates it via Microsoft's VML interface (based on the standalone JavaScript ExplorerCanvas library). This emulation does not include the text functions.

I checked out the code for GWT Canvas and modified it to expose the text functions via JSNI. This seemed trivial at first glance, and it would have been were it not for the fact that the GWT development kit uses the integrated SWT browser (which in the case of Microsoft Windows is IE). This means that even though I exposed the text rendering functions, they were not available in the test environment, as they are not supported in IE. For testing purposes I wanted to test against Firefox, but I could find no option for switching the integrated host browser in the GWT Eclipse plugin. I understand that if I run on Linux I will be able to test against Firefox, but that is not feasible right now.

If I compile the GWT application and launch it externally in Firefox, I can see that the text functions work as expected. I just cannot see this in my development environment.

I decided to fall back to a pure HTML4-based alternative in the interim, and hope that IE adds canvas support soon and/or that the GWT Eclipse plugin adds the ability to switch the host browser in future.


GWT - My love for

Recently I have been dipping my toes in the waters of Google Web Toolkit. GWT is a framework for writing client-side Web 2.0 style applications without having to write any JavaScript or HTML, and without having to think about any browser-specific quirks. All code is written in Java and translated into JavaScript at compile time (generated pages have no dependencies on Java at all).

GWT makes it simple to separate client-side logic and server-side logic. In GWT, pages are built from Panels and Components/Widgets. Pages are built programmatically but are rendered using standard CSS layouts.

GWT Ext is a library that I can recommend wholeheartedly, as it's very easy to build a professional-looking front-end using this extension.

What was interesting for me was not having to think of a website in terms of text generation, as has been normal up until recently. Mixing code and layout has always been a mug's game. For this reason I had recently used Wicket, which takes a similar component-based approach, but there the client is a slave to the server - i.e. too many round trips.

GWT turns the world on its head: the server is a slave to the client. The client pre-loads the whole GUI once and (in my recent application) never reloads. The client submits asynchronous JavaScript requests to the server from time to time to gather server-side state or to confirm permissions, but upon loading, the client fetches its state model from the server and from then on very infrequently needs information from it.

All code is written in Java, server side and client side. The server side is just an extension of a regular Servlet that implements a specialised RPC interface. Clients can call server methods via a static RPC object with an asynchronous callback.
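The callback shape of these RPC calls can be illustrated in plain Java. This is a self-contained sketch of the pattern only: the real interface is GWT's com.google.gwt.user.client.rpc.AsyncCallback, and a real call travels over the wire to the RPC Servlet rather than to a local stub like the one below.

```java
public class RpcSketch {
    // Mirrors the shape of GWT's AsyncCallback: the result arrives later,
    // via one of these two methods, instead of as a return value.
    interface AsyncCallback<T> {
        void onSuccess(T result);
        void onFailure(Throwable caught);
    }

    // Stand-in for a generated async service stub. A real GWT stub would
    // issue an asynchronous HTTP request to the server-side Servlet.
    static void getServerState(String key, AsyncCallback<String> callback) {
        if (key == null) {
            callback.onFailure(new IllegalArgumentException("key required"));
        } else {
            callback.onSuccess("state-for-" + key);
        }
    }

    public static void main(String[] args) {
        // The client fires the request and carries on; the callback runs
        // when (in real GWT, if) the response comes back.
        getServerState("permissions", new AsyncCallback<String>() {
            public void onSuccess(String result) {
                System.out.println("Loaded: " + result);
            }
            public void onFailure(Throwable caught) {
                System.out.println("RPC failed: " + caught.getMessage());
            }
        });
    }
}
```

The asynchronous shape is what lets the pre-loaded GUI stay responsive while server state trickles in.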

Clients are written in the same manner as Swing applications; anyone familiar with Swing or SWT will feel at home with GWT. Most of the common Java libraries can be used in GWT, but not all. Where some are missing, it is usually for obvious reasons, such as the equivalent function not being available inside a browser JavaScript environment.

All client-side code is translated into optimised and obfuscated JavaScript at compile time. Unused methods from referenced classes and objects are not included, and long variable names become obfuscated short variable names. The JavaScript is embedded in an HTML file, and referenced images are packed into a single image pack for speed of loading.

Best of all, the client and server side together can be step-debugged in Eclipse or other IDEs. The client-side code can be step-debugged line by line in Java before ever touching the browser, and the author benefits from strict type checking and compile-time error checking. By the time the JavaScript is compiled, there are no syntax errors or typos. In fact, I have never needed to debug any JavaScript in my past two months with GWT.

My particular coding style in GWT is to build an application model: a tree hierarchy of my application schema that semi-relates to the server-side XML configuration of the application. When the client first loads the page, it makes a request to the server for a local copy of the model. From the time the client has the model locally, very little interaction with the server is required. The client can render its own screens, perform its own checks, populate its own tooltips, etc.

Session validation is an area that is a little under-documented in GWT, but via the use of an authentication Servlet, you can supply clients with different models depending on their access rights.

GWT is a great toolkit for small-to-medium sized applications. I have not had the chance to test it on larger applications, and much depends on the implementation of the RPC mechanism, but I cannot and will not go back to direct HTML manipulation when creating websites.

Latency and round trips are bastards, and as someone based in Japan who primarily reads European or American websites, I know just how much of a bastard they can be. The web exacerbates poor latency by having a single page reference tens and sometimes hundreds of other resources. Every image, every stylesheet, every referenced JavaScript library results in another round trip, with a browser keeping only two connections active at one time. It may take milliseconds to serve the content, but every round trip has a latency cost. As such, many pages can take up to 10 seconds to load, even on a 100 Mbit internet connection.
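A back-of-the-envelope calculation shows how those numbers stack up. The figures below are illustrative assumptions (100 referenced resources, the two-connection browser limit, 200 ms round-trip time from Japan to a US or European server), but they land right at the 10-second mark regardless of bandwidth.

```java
public class LatencySketch {
    // Lower bound on page load time when latency dominates: each "round"
    // fetches at most parallelConnections resources, and every round
    // costs at least one round trip.
    static double loadTimeSeconds(int resources, int parallelConnections,
                                  double rttMs) {
        int rounds = (int) Math.ceil((double) resources / parallelConnections);
        return rounds * rttMs / 1000.0;
    }

    public static void main(String[] args) {
        // 100 resources / 2 connections = 50 rounds; 50 * 200 ms = 10 seconds,
        // no matter how fat the pipe is.
        System.out.println(loadTimeSeconds(100, 2, 200));
    }
}
```

This is exactly why bundling everything into the first page load, as GWT does, pays off more than raw bandwidth ever could.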

GWT goes a long way to fixing round-trip performance issues for browser-based applications, without the developer having to think too much about how. The majority of a site is bundled when you hit the first page, and distributed sites will load faster in this scenario.

How much you love GWT depends on how much Java experience you have to leverage and how much you have invested in other similar technologies. As someone who has 10 years of Java experience and as someone who found the alternatives to be clumsy and human-error-prone, I love it. It just works. And works well.

Google yet again prove themselves masters of making complex things simple. As they should be.