Cloud computing started with cloud-based processing, expanded to storage, and now faces a third contender: graphics. Nvidia will power cloud-based GPU rendering that can be deployed as server farms.
Nvidia looks to be bringing 3D rendering to the cloud with the news that it is partnering with rendering specialist Mental Images to launch its RealityServer platform. The pair hope to develop systems for offering web-based 3D rendering via Nvidia’s high-end Tesla RS GPUs.
As planned, on Nov. 30 Nvidia will launch the RealityServer platform, featuring a hardware server based on the Fermi architecture with a minimum of eight GPUs, scaling to 32 and even 100. A developer edition of the RealityServer 3.0 software will be downloadable free of charge, including the right to deploy non-commercial applications.
RealityServer is designed around CUDA general-purpose GPU technology and features 240 individually addressable cores. The technology is quite scalable, considering that a single implementation can feature 100 or more Tesla GPUs capable of simultaneously rendering for thousands of users. This will allow for the creation of extremely realistic objects via a simple user interface, and can swiftly calculate complex effects like shadows, anti-aliasing, anisotropy and reflections.
The idea is quite similar to what AMD had with its Fusion Cloud, though Fusion is aimed more at selling games on a SaaS model than at professional graphics creation.
The purpose behind RealityServer is fairly simple: change the way graphics are produced (rendered) today. Most PC users are used to their PC rendering 3D graphics via a GPU. Animation and movie studios and product-development teams, faced with the task of rendering far more complex scenes, use a series of locally networked servers or workstations known as a render farm.
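A render farm works because frames are independent and can be rendered in parallel across many machines. A minimal sketch of that idea, with a trivial placeholder standing in for the expensive per-frame render job (all names here are illustrative, not Nvidia's or any studio's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(n):
    # Stand-in for an expensive per-frame render job on one farm node.
    return f"frame_{n:04d}.png"

def render_farm(num_frames, workers=4):
    # Frames are independent, so they distribute trivially across workers;
    # map() returns results in frame order regardless of completion order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, range(num_frames)))

print(render_farm(8))
```

The same structure scales from threads on one box to jobs dispatched over a network, which is exactly the leap RealityServer makes.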
RealityServer takes the concept one step further by moving those servers away from the user, across the Internet, into a cloud configuration.
Is it even practical? Google Earth already proves the approach: it caches a small portion of data on the user’s PC, then streams additional data over a broadband connection as the user moves across the virtual map.
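That cache-and-stream pattern can be sketched in a few lines: keep a small LRU cache of map tiles locally and fetch from the server only on a miss, as the viewer moves around. This is a hypothetical illustration of the scheme, not Google's implementation:

```python
from collections import OrderedDict

class TileCache:
    """Tiny LRU cache of map tiles; misses simulate streaming from a server."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.tiles = OrderedDict()  # (x, y) -> tile data
        self.fetches = 0            # count of simulated network fetches

    def _fetch_from_server(self, x, y):
        self.fetches += 1
        return f"tile({x},{y})"     # stand-in for streamed image data

    def get(self, x, y):
        key = (x, y)
        if key in self.tiles:
            self.tiles.move_to_end(key)         # mark as recently used
        else:
            self.tiles[key] = self._fetch_from_server(x, y)
            if len(self.tiles) > self.capacity:
                self.tiles.popitem(last=False)  # evict least recently used
        return self.tiles[key]

cache = TileCache(capacity=4)
for pos in [(0, 0), (0, 1), (0, 0), (1, 0), (1, 1), (0, 0)]:
    cache.get(*pos)
print(cache.fetches)  # prints 4: revisited tiles are served locally
```

Only four of the six lookups hit the network; the rest come from the local cache, which is why the technique feels responsive over a broadband link.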
It does have one weakness, per Nvidia: the geometry of the virtual world is calculated by the local graphics engine, making it impossible for some mobile devices to leverage it.
Consider this: if the graphics are too simple, workstations and a user’s own 3D card would make RealityServer irrelevant. But the latency and bandwidth limitations imposed on cloud-based rendering make real-time video rendering nearly impossible.
As a compromise, RealityServer will render complex static images using ray tracing, a graphics technique that models the paths of individual light rays as they reflect and refract across various surfaces and textures.
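At its core, ray tracing casts one ray per pixel from the camera into the scene and tests it against the geometry. A minimal, illustrative sketch (not Nvidia's implementation; the sphere, camera position and image size are made-up values) that traces rays against a single sphere and prints an ASCII image:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None       # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width=8, height=4):
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            # Map the pixel to a point on an image plane at z = -1.
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            length = math.sqrt(x * x + y * y + 1)
            d = (x / length, y / length, -1 / length)  # unit ray direction
            t = ray_sphere_hit((0, 0, 0), d, (0, 0, -3), 1.0)
            row += "#" if t else "."
        rows.append(row)
    return rows

for line in render():
    print(line)
```

Real renderers extend this with secondary rays for shadows and reflections, which is what makes the technique so expensive and so well suited to a massively parallel GPU farm.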
It will be up to customers, however, to provide the front-end software and applications that take advantage of the technology: a car manufacturer, say, using RealityServer to render its latest roadster, with software that dynamically changes the color.
The platform has a lot of potential; all it needs is the right kind of application to integrate the client and the cloud in a phenomenal way.
What’s next? A cloud OS: Chrome OS.
What do you think? Is it all worth it?