FAQ

Is this only an Android mobile technology?

No. Ascender’s technology splits the execution of Android apps between a remote side and a local side. Either device, or both, may be mobile or fixed.

One use case has the remote app executing in the cloud with the local device a fixed desktop machine. Some use cases might have both the remote and local devices mobile. Other use cases can be a mixture of mobile and fixed.

Is the local viewer just Linux?

No, the local viewer is not tied to any specific system. It can run under Linux, Windows x86, Windows RT, iOS, or a WebGL browser.

Is the remote server an Android system?

Not necessarily. The remote server can be a virtualized or emulated Android system running on a non-Linux host.

Do the remote and local systems need to have the same CPU architecture?

No, the CPU architectures of the remote and local sides need not match. For example, the remote system might be an ARM system while the local system is x86.

How does Ascender’s remote graphics technology differ from technologies currently used?

Ascender’s approach performs the rendering of pixels only on the local client, which makes for a much more affordable solution: no expensive graphics hardware is needed in the cloud. In addition, Ascender’s compression techniques reduce the networking overhead, typically by over an order of magnitude.

Currently, most remote graphics is rendered in the cloud, and pixel-based video compression techniques are used for image transmission. These solutions perform poorly, profligately expending both system and network resources.

What challenge does Ascender’s enabling technology address?

Modern graphical rendering systems are challenged to deliver an ever higher quality visual experience: frames are generated at a very fast rate (~60 fps), and a complete re-rendering is performed for each frame. In addition, the size and density of displays have grown over the previous generation, greatly increasing the number of pixels generated for each frame. Together, these changes strain the prevailing methods of providing remote graphics. To meet networking bandwidth constraints, remote pixel-based methods are typically forced into compromises in:

    • Latency
    • Resolution
    • Image Quality
    • Frame Rate

How does Ascender’s approach use fewer resources?

Our approach is based on the observation that these days virtually all computing devices with graphics capability include a capable GPU or can render 2D graphics in software. Even smart TVs are beginning to support WebGL. We therefore send the compressed rendering API stream rather than the compressed pixel stream, obtaining a number of efficiencies (a minimal sketch of the idea follows this list):

    1. No GPUs are needed in the cloud.
    2. The network bandwidth is reduced by about an order of magnitude.
    3. The compression algorithm is lossless. There are no compression artifacts or fuzziness.
    4. Higher frame rates than H.264 are easily supported.
    5. The resources used are largely insensitive to screen resolution.
    6. The compression codec introduces very low latency into graphics playback, in contrast to MPEG-based codecs such as H.264.
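
To make this concrete, here is a minimal sketch of the idea, assuming a hypothetical wire format: instead of reading back and compressing a finished frame, the remote side serializes each rendering call into a few bytes of command stream. The opcodes and the stream_write() transport below are illustrative stand-ins, not Ascender’s actual protocol.

    /* Hypothetical wrapper: the remote side emits a few bytes
       describing the call instead of a full frame of pixels. */
    #include <GLES2/gl2.h>
    #include <stdint.h>

    enum { CMD_CLEAR = 1, CMD_DRAW_ARRAYS = 2 };    /* illustrative opcodes */

    void stream_write(const void *buf, uint32_t len);  /* transport, app-defined */

    void remote_glDrawArrays(GLenum mode, GLint first, GLsizei count)
    {
        uint32_t msg[4] = { CMD_DRAW_ARRAYS, (uint32_t)mode,
                            (uint32_t)first, (uint32_t)count };
        stream_write(msg, sizeof msg);   /* 16 bytes, vs. megabytes of pixels */
    }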

If it’s good enough for Netflix, is it good enough for me?

Netflix streams mostly photographically generated material, which is the domain of the MPEG family of video codecs. Programmatically generated material (applications, gaming) is a much smaller subset of image streams and can be compressed far more efficiently than video codecs allow. Our approach is to send the higher-level rendering stream, which is intrinsically smaller than the pixel stream and can be easily compressed. Typically the network bandwidth used is 5-10% of what H.264 compression requires.
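
A back-of-envelope comparison, using illustrative round numbers rather than measurements, shows the scale involved:

    /* Rough bandwidth comparison for a 1080p, 60 fps stream.
       All figures are illustrative round numbers. */
    #include <stdio.h>

    int main(void)
    {
        double raw_bps  = 1920.0 * 1080 * 60 * 4 * 8;  /* uncompressed RGBA */
        double h264_bps = 8e6;               /* a typical H.264 rate for 1080p60 */
        double api_bps  = h264_bps * 0.075;  /* 5-10% of H.264, per the text */

        printf("raw pixels : %.1f Gbit/s\n", raw_bps / 1e9);   /* ~4.0  */
        printf("H.264      : %.1f Mbit/s\n", h264_bps / 1e6);  /*  8.0  */
        printf("API stream : %.2f Mbit/s\n", api_bps / 1e6);   /* ~0.60 */
        return 0;
    }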

It should be noted that just two websites using video codecs, Netflix and YouTube, account for about half of peak North American internet usage. Using video codecs for cloud gaming and remote apps has the potential to eat up a large amount of Internet capacity, with corresponding expense. Optimizing network usage is important to control costs.

Is this about cloud gaming?

Yes, but that is not the primary focus. The scope of Ascender’s technology is broader than cloud gaming, or even Android apps; many use cases are described in the whitepaper.

Where are the GPUs?

The predominant way of providing modern remote graphics is to render the pixels in the cloud: the application executes in the cloud and the pixels are GPU-rendered there. The stream of pixel frames is compressed with an H.264 codec and sent to the local client, which decodes the video stream and displays the graphics. This creates a need for a large number of GPUs in the cloud, a need GPU manufacturers have met with arrays of GPUs such as the Nvidia Grid and AMD’s Radeon Sky, used both to render and to compress the graphics stream. Needless to say, GPU hardware in the cloud greatly increases costs.

Ascender’s approach needs no GPUs in the cloud, but uses the GPU of the local device.
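
Continuing the hypothetical command stream sketched earlier, the local viewer’s job reduces to decoding each command and handing it to its own GPU:

    /* Hypothetical replay step on the local client: read one command
       from the (decompressed) stream and dispatch it to the local GPU. */
    #include <GLES2/gl2.h>
    #include <stdint.h>

    uint32_t stream_read_u32(void);   /* transport, app-defined */

    void replay_one_command(void)
    {
        switch (stream_read_u32()) {
        case 1:  /* CMD_CLEAR */
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            break;
        case 2: {  /* CMD_DRAW_ARRAYS: mode, first, count follow */
            GLenum  mode  = (GLenum)stream_read_u32();
            GLint   first = (GLint)stream_read_u32();
            GLsizei count = (GLsizei)stream_read_u32();
            glDrawArrays(mode, first, count);  /* rendered on the local GPU */
            break;
        }
        }
    }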

How can Ascender’s technology enable the convergence of Chrome OS with Android?

Ascender’s remote rendering technology can effectively “converge” Chrome OS and Android. Android apps can be run in the cloud and the graphics can be efficiently “exported” using Ascender’s enabling technology.

The principles of cloud computing are largely orthogonal to the Android system.

In Android, the availability and quality of the network connection do not totally dictate the user experience. Android devices try to maintain as much usability as possible when a data network is unavailable. Android apps are normally installed on physical devices, a stateful model. As such, each Android device is personalized and maintained, usually by its owner.

The apps of cloud-based systems, such as Chrome OS, are normally loaded dynamically over the network on demand, and are not persistently installed on the device. After the current session terminates, the state is reinitialized; no state is saved between sessions. The advantage is that all cloud-based devices are in a sense equivalent and non-personalized: anyone can sit at any cloud device, log in, and work in their personal environment. The system is maintained professionally and centrally. In addition, cloud-based systems have many security advantages.

These two computing paradigms, stateful and stateless, are difficult to harmonize. Integrating Chrome OS apps on Android is relatively easy, since the Chrome browser runs under Android. Until now, however, running Android apps under Chrome OS was an unsolved problem: the “convergence” problem of Chrome OS and Android.

Ascender’s remote rendering technology resolves it: Android apps run in the cloud, and their graphics are efficiently “exported” to the Chrome OS client using Ascender’s enabling technology.

Is this an OpenGL rendering technology?

Yes, but not only. The technology is general and deals with many different rendering APIs. The emphasis has been on Android, where five different APIs have been tested:

    1. Hardware Renderer - OpenGLRenderer.cpp
    2. OpenGL ES 2.0
    3. Software Renderer - Canvas.cpp
    4. Skia
    5. OpenGL ES 1.X

The reason for supporting so many APIs is to cover as many unmodified apps as possible. See section 5.2 of the whitepaper.

Isn’t the WebGL demo just a video being played in the browser?

No, the demo is not generated from a compressed video stream. It is rendered from a compressed rendering API stream (in this case, OpenGL ES 2.0); WebGL is used to render the OpenGL stream. A technical explanation and a non-technical explanation can be found on our site.

Why doesn’t the WebGL demo work in browsers other than Firefox?

The WebGL demo is a straightforward translation of our C-language OpenGL viewer. It uses the awesome Mozilla Emscripten C/C++-to-JavaScript compiler. The generated code is pure JavaScript but currently works only in the Firefox browser.
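
For the curious, compiling such an OpenGL ES 2.0 viewer to WebGL-backed JavaScript looks roughly like this (an illustrative invocation, not our actual build):

    emcc viewer.c -O2 -s FULL_ES2=1 -o viewer.html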

What about non-graphical APIs?

The main technological challenge that has been addressed is providing remote graphics. There are many other APIs that have to be supported:

    • Audio
    • Camera
    • Keypad
    • Touchscreen
    • Location Manager

For example, audio output might be exported from the remote server to the local client, while audio input is imported to the remote server from the local client. For spatially separated devices, the location manager might reside on the local client, which provides this service to the remote server.

Interaction with these services may incur round-trip latencies. For the touchscreen service, for instance, the latency between the “touch” and the graphical response is at least one round-trip delay.
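
As an illustration, the local-to-remote direction can be as simple as forwarding input events; the message layout below is hypothetical, not Ascender’s actual protocol:

    /* Hypothetical touch-event message, imported from the local client
       to the remote server. The graphical response to the touch cannot
       arrive sooner than one network round trip later. */
    #include <stdint.h>

    struct touch_event {
        uint32_t type;        /* e.g. DOWN, MOVE, or UP */
        uint32_t pointer_id;  /* which finger */
        float    x, y;        /* screen coordinates */
        uint64_t time_us;     /* local timestamp, for latency accounting */
    };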

How do existing applications need to be modified to be hosted?

They don’t. The idea is to support as many apps from Google Play unmodified as possible; that is why we aim to support as many rendering APIs as possible. Figure 9 of the whitepaper describes the “coverage” of the roughly one million apps in Google Play by our technology.

What use case is addressed by this technology?

This technology is enabling. It covers many different use cases. Some use cases described in the whitepaper are:

    1. Cloud gaming
    2. Android apps
    3. Mobile devices
    4. App library
    5. App demo before purchase
    6. Remote enterprise applications
    7. Set-top boxes
    8. Automated testing
    9. WebGL browser implementations

However, the list is not exhaustive: as we speak with professionals in different industries, more use cases are cited and the list grows.

Must the remote app be just Android?

No, the principles behind this technology are probably valid for any modern graphics system. Android is a convenient target because it is open, can be understood and modified, and has a large market share.