5.2 Android Apps

5.2.1 Motivation

The principles of cloud computing are largely orthogonal to the Android system. In Android, the availability and quality of the network connection do not entirely dictate the user experience: Android devices try to remain as usable as possible when a data network is unavailable. Android apps are normally installed persistently on physical devices, a stateful model. As such, each Android device is personalized and maintained, usually by its owner.

The apps of cloud-based systems are normally loaded dynamically over the network on demand and are not persistently installed on the device. After the current session terminates, the state is reinitialized; no state is saved between sessions. The advantage is that all cloud-based devices are, in a sense, equivalent and non-personalized: anyone can sit at any cloud device, log in, and work in their personal environment. The system is maintained professionally and centrally. In addition, cloud-based systems offer many security advantages.

These two computing paradigms, stateful and stateless, are difficult to harmonize. A good example of two systems that do not interoperate well is Google’s pair of operating systems:

    • Android - Mobile OS
    • ChromeOS - Cloud OS

Integrating ChromeOS apps on Android is easy, since the Chrome browser runs under Android. However, until now, running Android apps under ChromeOS has been an unsolved problem, also known as the “convergence” problem of ChromeOS and Android.

Ascender’s remote rendering technology can effectively “converge” ChromeOS and Android: Android apps can be run in the cloud, and their graphics can be efficiently “exported” using Ascender’s enabling technology.

5.2.2 Technical Challenges

The rendering APIs supported by Android are numerous and varied. Some of them are enumerated below:

➀ OpenGLRender.cpp If the hardware renderer is enabled, the Canvas.java class uses it, as shown in the somewhat simplified view in Fig. 8↓. This C++ class largely retains the Skia-like API of Canvas.java and has an API similar to those of the software renderer Canvas.cpp (➃) and the Skia library (➂). A minimal sketch of driving this path from application code appears after this list.

➁ OpenGL ES 2.0 The lower level of the hardware rendering stack is this 3D rendering standard, which is managed by the non-profit technology consortium Khronos Group. This rendering API is used for Android’s GUI when the hardware rendering path (the right path in Fig. 8↓) is taken.

➂ Skia Rendering Library The Skia library is used for the software rendering of pixels.

➃ Canvas.cpp This is the software renderer that the Canvas.java class uses when the software rendering path (the left path in Fig. 8↓) is taken.

➄ OpenGL ES 1.x Some programs still use this earlier version of OpenGL ES, which is not compatible with OpenGL ES 2.0.
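
As referenced in item ➀, here is a minimal Java sketch of how application code drives these two paths: a custom View draws through the Canvas.java API and can query, or force, the renderer that backs it. The view class is hypothetical; isHardwareAccelerated() and setLayerType() are standard Android SDK calls.

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.view.View;

    // Minimal sketch: the same Canvas.java calls are executed either by the
    // hardware renderer (➀ OpenGLRender.cpp, over OpenGL ES 2.0 ➁) or by the
    // software renderer (➃ Canvas.cpp, over the Skia library ➂).
    public class PathDemoView extends View {
        private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

        public PathDemoView(Context context) {
            super(context);
            // Uncommenting the next line forces the software path (the left path in Fig. 8).
            // setLayerType(View.LAYER_TYPE_SOFTWARE, null);
        }

        @Override
        protected void onDraw(Canvas canvas) {
            // True on the hardware path, false on the software path.
            boolean hw = canvas.isHardwareAccelerated();
            paint.setColor(hw ? 0xFF00AA00 : 0xFFAA0000);
            canvas.drawCircle(getWidth() / 2f, getHeight() / 2f, 100f, paint);
        }
    }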

figure rendering.png

Figure 8 The Android Graphics Stack

The reason all these rendering APIs need to be considered relates to the “coverage” of the large number of apps within the app stores: the more rendering APIs that are supported, the greater the fraction of the apps in the stores that is covered.

For example, an Android 3D game might use the OpenGL ES 2.0 API through the NDK (Native Development Kit). Imangi’s Temple Run 2 does exactly this: it is built on the Unity3d game engine, and the rendering API it uses is OpenGL ES 2.0.
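
As a rough illustration (not taken from Temple Run 2 itself), a minimal Java activity that requests an OpenGL ES 2.0 rendering context looks like the following; the activity class is hypothetical, while GLSurfaceView and GLES20 are standard Android SDK classes.

    import android.app.Activity;
    import android.opengl.GLES20;
    import android.opengl.GLSurfaceView;
    import android.os.Bundle;

    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.opengles.GL10;

    // Minimal sketch: an activity whose whole UI is an OpenGL ES 2.0 surface,
    // the same rendering path an NDK or Unity3d game ultimately drives.
    public class GlEs2Activity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            GLSurfaceView view = new GLSurfaceView(this);
            view.setEGLContextClientVersion(2);   // request an OpenGL ES 2.0 context (API ➁)
            view.setRenderer(new GLSurfaceView.Renderer() {
                @Override
                public void onSurfaceCreated(GL10 unused, EGLConfig config) {
                    GLES20.glClearColor(0f, 0f, 0f, 1f);
                }
                @Override
                public void onSurfaceChanged(GL10 unused, int width, int height) {
                    GLES20.glViewport(0, 0, width, height);
                }
                @Override
                public void onDrawFrame(GL10 unused) {
                    // In a real game the scene is drawn here; every GLES20 call
                    // becomes part of the stream that remote rendering must carry.
                    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
                }
            });
            setContentView(view);
        }
    }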

Ascender implemented remote graphics for, and tested, the five graphical rendering APIs mentioned previously. Support for these APIs covers a large number of the Android apps in large repositories such as Google Play. Figure 9↓ shows the coverage of these five APIs over an app repository as a Venn diagram for a finite collection of sets ➀, ➁, ➂, ➃, ➄ and ➅. Some comments on this figure are appropriate.

    • The ➀ (OpenGLRender.cpp) renderer API is completely covered by the ➁ (OpenGL ES 2.0) API, so support for the ➀ renderer is not critical. On the other hand, the bandwidth this renderer uses is less than the bandwidth of ➁, so it pays to implement it for efficiency reasons; ➀ should be used instead of ➁ whenever possible.
    • NDK apps that directly use ➁ are in the subset ➁ − ➀.
    • NDK apps that use OpenGL ES 1.x are in the set ➄.
    • Figure 9↓ is somewhat simplified. For example, there are applications that use both ➁ OpenGL ES 2.0 and ➄ OpenGL ES 1.x; thus ➁ ∩ ➄ ≠ ∅, unlike what is drawn in the figure.
    • Apps that are in Google Play but are not supported for remote rendering via the implemented APIs are in the subset ➅ − (➀ ∪ ➁ ∪ ➂ ∪ ➃ ∪ ➄). Both of these set relations are illustrated in the code sketch below.
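
For concreteness, the set relations in the last two bullets can be written directly with java.util.Set operations. The sketch below is purely illustrative: the app identifiers and the assignment of apps to APIs are invented, not taken from a real repository scan.

    import java.util.HashSet;
    import java.util.Set;

    // Illustrative only: app IDs grouped by the rendering API they were observed to use.
    public class CoverageSketch {
        public static void main(String[] args) {
            Set<String> es2   = Set.of("game.a", "game.b", "ui.app1");   // ➁ OpenGL ES 2.0
            Set<String> es1x  = Set.of("game.b", "legacy.c");            // ➄ OpenGL ES 1.x
            Set<String> skia  = Set.of("ui.app1", "ui.app2");            // ➂ software/Skia
            Set<String> store = Set.of("game.a", "game.b", "legacy.c",
                                       "ui.app1", "ui.app2", "exotic.d"); // ➅ repository

            // ➁ ∩ ➄: apps using both ES 2.0 and ES 1.x (non-empty, unlike the drawn figure).
            Set<String> both = new HashSet<>(es2);
            both.retainAll(es1x);

            // ➅ − (➁ ∪ ➂ ∪ ➄): apps not reachable through the implemented APIs.
            Set<String> covered = new HashSet<>(es2);
            covered.addAll(skia);
            covered.addAll(es1x);
            Set<String> unsupported = new HashSet<>(store);
            unsupported.removeAll(covered);

            System.out.println("ES2 ∩ ES1.x     = " + both);        // [game.b]
            System.out.println("not yet covered = " + unsupported); // [exotic.d]
        }
    }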

5.2.3 Standard Android UI Programs

The majority of Android apps are written in Java using the Android UI toolkit. Simple examples are the standard contact manager and the settings manager in Android. In the case of hardware rendering, the API stream generated is several times more compact than the OpenGL ES 2.0 stream. Each rendering command is typically compressed into 2-4 bits. Even at 60 fps, the rendering stream is typically less than 20 KBytes/sec.
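
As a rough arithmetic illustration of those figures (a sketch, not a measurement): 20 KBytes/sec at 60 fps is about 333 bytes, i.e. roughly 2,700 bits, per frame, which at about 3 bits per compressed command corresponds to on the order of 900 rendering commands per frame.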

The reason compression is so effective on the OpenGLRender.cpp stream is that the objects sent in the compressed rendering stream are entries from a “routine dictionary” that is kept synchronized between the remote encoder and the local decoder. These routines map to higher-level objects from the application and toolkit levels, each of which contains many renderer commands. Effectively, dynamic analysis of the rendering stream provides a reverse engineering of the graphical routines of the application and toolkit.
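
The following is a minimal sketch of the routine-dictionary idea, under the assumption (ours, not a description of Ascender’s implementation) that a recurring routine can be identified by a stable signature: the first occurrence is sent in full and assigned an index, later occurrences are sent as the index alone, and the decoder replays the cached routine. The class names and wire format are invented for the illustration.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Illustrative sketch of a synchronized "routine dictionary".
    // Encoder and decoder assign indices in the same order, so an index on the
    // wire unambiguously names a routine already known to the other side.
    public class RoutineDictionarySketch {

        static final class Encoder {
            private final Map<String, Integer> index = new HashMap<>();

            /** Returns "D<i>" for a dictionary hit, or "N<i>:<body>" for a new routine. */
            String encode(String routineSignature) {
                Integer i = index.get(routineSignature);
                if (i != null) {
                    return "D" + i;                       // hit: a few bytes (or bits) on the wire
                }
                int next = index.size();
                index.put(routineSignature, next);
                return "N" + next + ":" + routineSignature; // miss: send the routine once, in full
            }
        }

        static final class Decoder {
            private final List<String> routines = new ArrayList<>();

            String decode(String message) {
                if (message.startsWith("N")) {
                    String body = message.substring(message.indexOf(':') + 1);
                    routines.add(body);                   // same index the encoder assigned
                    return body;
                }
                return routines.get(Integer.parseInt(message.substring(1)));
            }
        }

        public static void main(String[] args) {
            Encoder enc = new Encoder();
            Decoder dec = new Decoder();
            String[] frame = {"drawListItem", "drawDivider", "drawListItem", "drawListItem"};
            for (String routine : frame) {
                String wire = enc.encode(routine);
                System.out.println(wire + " -> " + dec.decode(wire));
            }
        }
    }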

In practice, even zlib compression applied to the rendering stream provides excellent compression while consuming only a small amount of resources.
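
For reference, this is a minimal sketch of applying zlib (DEFLATE) compression to one chunk of such a stream with the standard java.util.zip.Deflater class; the class name and the placeholder chunk contents are invented for the example.

    import java.io.ByteArrayOutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.zip.Deflater;

    // Minimal sketch: zlib (DEFLATE) compression of one chunk of a rendering stream.
    public class ZlibSketch {
        static byte[] compress(byte[] input) {
            Deflater deflater = new Deflater(Deflater.BEST_SPEED); // favor low CPU cost
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            while (!deflater.finished()) {
                out.write(buf, 0, deflater.deflate(buf));
            }
            deflater.end();
            return out.toByteArray();
        }

        public static void main(String[] args) {
            // Placeholder stand-in for a highly repetitive rendering stream chunk.
            byte[] chunk = "drawRect drawRect drawText drawRect drawRect drawText "
                    .repeat(200).getBytes(StandardCharsets.UTF_8);
            byte[] packed = compress(chunk);
            System.out.println(chunk.length + " bytes -> " + packed.length + " bytes");
        }
    }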

figure galaxy.png

Figure 9 A Galaxy of a Million Stars (Google Play)

5.2.4 Non-Graphical APIs

The Android system was not designed for remote execution. In order to run an Android application remotely, system services must be exported from, or imported to, the remote server. Such services include:

    • Camera driver
    • Audio driver
    • Keypad driver
    • Touchscreen driver
    • Location manager
    • Graphics subsystem

For example, audio output might be exported from the remote server to the local client, while audio input might be imported to the remote server from the local client. For spatially separated devices, the location manager might reside on the local client, and its service would be imported by the remote server.
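
As an illustration of what “exporting” such a service can look like, the sketch below plays raw PCM audio, received over a plain TCP socket, on the local client through the standard Android AudioTrack API. The class name, the raw-socket framing and the audio format are assumptions made for the example and do not describe Ascender’s actual transport.

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    import java.io.InputStream;
    import java.net.Socket;

    // Illustrative sketch: the local client plays PCM audio "exported" by the
    // remote server. The format (16-bit mono, 44.1 kHz) and the raw-socket
    // transport are arbitrary choices made for the example.
    public final class RemoteAudioClient {

        /** Connects to the remote server and plays the exported PCM stream until it ends. */
        public static void playFrom(String host, int port) throws Exception {
            int sampleRate = 44100;
            int bufSize = AudioTrack.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    bufSize, AudioTrack.MODE_STREAM);
            track.play();
            try (Socket socket = new Socket(host, port);
                 InputStream in = socket.getInputStream()) {
                byte[] buf = new byte[bufSize];
                int n;
                while ((n = in.read(buf)) > 0) {
                    track.write(buf, 0, n);   // hand the received samples to the audio driver
                }
            } finally {
                track.stop();
                track.release();
            }
        }
    }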

Interaction with these services may incur round-trip latencies. Thus, for the touchscreen service, the latency between the “touch” and the resulting graphical response is at least one round-trip delay.
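
A bare-bones sketch of the touch path that incurs this round trip: the local client forwards each touch event to the remote server, and the visible response only arrives later, inside the rendering stream coming back. The listener class and the one-line-per-event wire format are invented for the illustration.

    import android.view.MotionEvent;
    import android.view.View;

    import java.io.OutputStream;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Illustrative sketch: touch events are "imported" by the remote server.
    // The graphical effect of each touch only becomes visible once the server has
    // processed it and its rendering stream reaches the client: at least one RTT.
    public final class TouchForwarder implements View.OnTouchListener {
        private final OutputStream toServer;
        // Network writes must not run on the UI thread, so hand them to a worker.
        private final ExecutorService sender = Executors.newSingleThreadExecutor();

        public TouchForwarder(Socket serverSocket) throws Exception {
            this.toServer = serverSocket.getOutputStream();
        }

        @Override
        public boolean onTouch(View v, MotionEvent event) {
            final String line =
                    event.getAction() + " " + event.getX() + " " + event.getY() + "\n";
            sender.execute(() -> {
                try {
                    toServer.write(line.getBytes(StandardCharsets.UTF_8)); // upstream half of the RTT
                    toServer.flush();
                } catch (Exception ignored) {
                    // a real client would reconnect or report the failure
                }
            });
            return true; // consumed locally; nothing changes on screen until the server responds
        }
    }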