The Wave Glider autonomous robot gathers data for a variety of clients and users. New GPU technology from Nvidia has helped Liquid Robotics increase productivity and the speed of innovation.
Oceans cover 72% of the Earth’s surface, yet more exploration has been done in outer space than in or on the oceans. There is a strong need for better ocean data across a broad variety of applications—fisheries, seismic monitoring, water quality, weather forecasting, and designing tidal energy systems, to name just a few. Liquid Robotics is developing an innovative ocean-going robot it hopes will change how ocean data is gathered. The Wave Glider can skim the surface with a variety of sensors, constantly gathering data.
Traditionally, oceanic observation has required some combination of ships, satellites, and buoys, all of which can be expensive, hard to manage, unreliable, or difficult to power at sea. The surfboard-sized Wave Glider is a solar- and wave-powered autonomous ocean robot, designed to be a more cost-effective way to gather ocean data. In creating the Wave Glider, Liquid Robotics has taken enormous design pressure off organizations that require ocean-based sensing data. Instead, the design pressure now rests with Liquid Robotics, both for integrating customer-specific sensor payloads and for continuing to enhance the performance and capabilities of the Wave Glider itself.
In search of a better engineering workflow
As design began on the revolutionary Wave Glider, the engineering team at Liquid Robotics realized their design workflow needed some revolutionizing of its own. “The key is that the Wave Glider is persistent, meaning it can operate continuously, without intervention, for months or even a year at a time,” says Tim Ong, VP of mechanical engineering at Liquid Robotics. “We can integrate scientific, governmental, or commercial sensors onto the Wave Glider platform and put it on the ocean to act as either a virtual buoy or a vehicle, to take and transmit sensor information.”
Liquid Robotics engineers use a number of software programs—including Dassault Systèmes SolidWorks, ANSYS, MathWorks MATLAB, and various proprietary codes—to design, test, simulate, and render complex mechanical designs, from structural assemblies to computational fluid dynamics models.
In the past, doing simulation or rendering consumed the complete computational power of their systems. “If you wanted to do anything else while running a simulation or modeling, you were out of luck,” says Ong. “You either got a cup of coffee or worked on something in the shop while the computer used all its processing power running one of these programs.”
Often, engineers would wait until the end of the day to set up simulation models. “We’d turn them on and leave the office and check them the next day, or we’d send them to a third party to run,” says Ong. “Often we’d return in the morning to find out the simulation crashed, so we’d have to reset it and try again the next evening. You can lose days or weeks, very quickly, if you’re doing complex modeling and you can’t run it and monitor it as it is running.”
To increase productivity and speed of innovation, Liquid Robotics now uses Nvidia Maximus technology, which pairs the visualization power of Nvidia Quadro graphics processing units (GPUs) with the parallel-computing power of the Nvidia Tesla C2075 companion processor to enable simultaneous 3D design, simulation, and visualization at the desktop.
“The real advantage of the Maximus technology is flexibility and increased productivity,” says Ong. “It’s a tremendous tool to allow my engineers to be flexible, to multitask, and to be more productive because they’re not waiting on computational power, period.”
Parallel tasks
For the engineers at Liquid Robotics, Maximus technology means not having to wait around for running simulations anymore. “We’re also very excited about the future of simulation, with newer software that will take advantage of the computational power of the Maximus technology,” says Ong. “Our design philosophy was always to build, prototype, test, iterate, and repeat over and over until we increased the performance and reliability of the Wave Glider vehicle to our satisfaction. Nvidia Maximus technology will enable a huge increase in our computational modeling capability, which will further accelerate our speed and efficiency.”
With the previous system, each engineer could work on only one software program on one computer at a time, so workflows involved passing work from engineer to engineer. Now, every Liquid Robotics engineer can run multiple programs on a single Maximus-powered workstation. For instance, a 12-core workstation can dedicate 6 CPU cores plus the Nvidia Tesla companion processor to ANSYS Workbench, leaving the workstation’s other 6 CPU cores, along with its Quadro GPU, to run SolidWorks and other design programs.
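The core split described above is, in effect, CPU-affinity partitioning: a long-running solver is confined to one half of the cores so interactive tools stay responsive on the other half. As a minimal sketch of how such a split can be expressed on Linux (the core numbers and the 6/6 division are illustrative, not Liquid Robotics’ actual configuration):

```python
import os

# Illustrative partition of a 12-core workstation: a batch solver gets
# cores 0-5, interactive CAD/rendering work keeps cores 6-11. The GPU
# side of the split (Tesla vs. Quadro) is handled by the applications
# themselves and is not shown here.
SOLVER_CORES = set(range(0, 6))
DESIGN_CORES = set(range(6, 12))

def pin_to(cores):
    """Restrict the current process to the given CPU cores (Linux only).

    Only cores that actually exist on this machine are requested; the
    function returns the resulting affinity set.
    """
    available = os.sched_getaffinity(0)
    usable = cores & available
    if usable:
        os.sched_setaffinity(0, usable)
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # A simulation launcher would call this before starting the solver,
    # leaving DESIGN_CORES free for the engineer's interactive session.
    print("solver pinned to cores:", sorted(pin_to(SOLVER_CORES)))
```

In practice the same effect is often achieved externally with `taskset` or a job scheduler, rather than inside the application; the point is simply that the heavy job and the interactive tools no longer contend for the same cores.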
“We have a limited number of engineers, so allowing each one to do multiple things at once is transformative for our workflow,” says Ong. “Now an engineer can design some mechanical components in SolidWorks while also using the structures package of ANSYS to do simulation. We never would have thought of doing this before.”
By leveraging Maximus technology, Liquid Robotics now spends less time integrating customer-specific sensor payloads and boosting the Wave Glider platform’s performance, whether that entails more power, faster speeds, or enhanced communications capabilities.
“We spent multiple millions of dollars and years of research on the current Wave Glider,” says Ong. “Now, within just a few weeks, we can change the design to incrementally increase performance. When you reduce the time it takes to do the design work, you know the cost is going down as well.”